Subject: Re: [BackupPC-users] An idea to fix both SIGPIPE and memory issues with rsync
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Mon, 14 Dec 2009 14:08:31 -0500
Robin Lee Powell wrote at about 10:10:17 -0800 on Monday, December 14, 2009:
 > Do you actually see a *problem* with it, or are you just assuming it
 > won't work because it seems too easy?

The problem I see is that BackupPC won't be able to back up hard links
on any interrupted or sub-divided backup unless you are careful to
make sure that no hard links span multiple restarts. And once the hard
links for a file are broken, all subsequent incrementals will be
unlinked too.


If you are just using BackupPC to back up data, then that might not be
important. On the other hand, if you are using BackupPC to back up
entire systems with the goal of having (close to) a bare metal
restore, then this method won't work.
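
To make the hard link issue concrete, here is a minimal sketch (my own
illustration in Python, not BackupPC or rsync code) of why preserving a
hard link requires both paths of a linked pair to be seen in the same
run: links are matched by device/inode, so a restart that splits the
pair across two file lists stores each half as an independent file.

# Illustration only -- not BackupPC or rsync code.
# A transfer that preserves hard links recognizes them by (st_dev, st_ino):
# a second path whose key has already been seen is recorded as a link to
# the first. If a restart splits the pair across two runs, neither run
# sees a duplicate key and the link is silently lost.
import os

def plan_transfer(paths):
    seen = {}        # (st_dev, st_ino) -> first path with that inode
    plan = []
    for p in paths:
        st = os.lstat(p)
        key = (st.st_dev, st.st_ino)
        if st.st_nlink > 1 and key in seen:
            plan.append(('hardlink', p, seen[key]))  # store as a link
        else:
            seen[key] = p
            plan.append(('copy', p, None))           # store file contents
    return plan

# Whole run: plan_transfer(['/a', '/b']) with /a and /b sharing an inode
# preserves the pair as a link. Split run: plan_transfer(['/a']) now and
# plan_transfer(['/b']) after a restart stores two unrelated copies, and
# every later incremental inherits the broken pairing.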

Personally, I haven't seen a major memory sink using rsync
3.0+. Perhaps you could provide some real-world data on the
potential savings so that people can understand the tradeoffs.
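
(If it helps to gather that kind of data: on Linux the resident memory
of a running rsync is easy to sample from /proc. A rough throwaway
sketch, assuming you look up the PID of the relevant rsync process
yourself:)

# Rough sketch for collecting memory numbers: sample the resident set
# size (VmRSS) of a running rsync from /proc (Linux only). Supply the
# PID of the rsync process you care about by hand.
import time

def rss_kb(pid):
    with open('/proc/%d/status' % pid) as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])   # value reported in kB
    return 0

def watch(pid, interval=5):
    peak = 0
    try:
        while True:
            cur = rss_kb(pid)
            peak = max(peak, cur)
            print('current %d kB, peak %d kB' % (cur, peak))
            time.sleep(interval)
    except (OSError, KeyboardInterrupt):
        print('final peak: %d kB' % peak)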

That being said, memory is pretty cheap, while reliable backups are
hard. So, I wouldn't expect Craig to integrate functionality that
would degrade the ability to reliably back up a *nix filesystem just
to save a little memory. Of course, none of this is meant to
discourage your own patches or forks if they suit your needs.

As an aside, if anything, others and I have been pushing for more
reliable backup of filesystem details such as extended attributes,
ACLs, NTFS metadata, etc., and removing the ability to back up hard
links would be a step backwards from that perspective.

Finally, the problem with interrupted backups that I see mentioned
most on this list is the interruption of large transfers that then have
to be restarted and retransferred over a slow link. Rsync itself is
pretty fast when it only has to check file attributes to determine
what needs to be backed up. So I think the best improvement consistent
with the BackupPC design would be to store partial file transfers so
that they can be resumed after an interruption. People have also
suggested tweaks to the algorithm for storing partial backups. I
suspect that a little effort in those directions would solve most
problems with few if any drawbacks. Again, I really haven't seen people
mentioning memory issues per se in the normal BackupPC context -- the
memory issue seems to come up mostly when people are using rsync
(outside of BackupPC) to duplicate the pool/pc trees and their large
number of hard links.
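
For comparison, standalone rsync already has the building blocks for
resuming an interrupted file: --partial and --partial-dir keep the
incomplete file around so a re-run picks up from the data already
received rather than resending it. A sketch of what that looks like
outside of BackupPC (the host and paths are invented, and BackupPC's
own rsync transport would need its own way of keeping partial pool
files):

# Sketch only: resuming an interrupted large transfer with standalone
# rsync, outside of BackupPC. The host and paths are made up.
# --partial-dir keeps the incomplete file between runs so a restart
# continues from the bytes already received instead of starting over.
import subprocess

RSYNC = ['rsync', '-aH', '--partial', '--partial-dir=.rsync-partial',
         'user@host:/export/data/', '/backup/data/']

def run_until_complete(max_attempts=5):
    for attempt in range(max_attempts):
        if subprocess.call(RSYNC) == 0:
            return True    # transfer finished cleanly
        # A nonzero exit usually means the run was cut short; with
        # --partial-dir in place the next attempt resumes the file.
    return False

if __name__ == '__main__':
    run_until_complete()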

