Subject: Re: [BackupPC-users] Backing up a BackupPC host - *using rsync+tarPCCopy*
From: Fernando Laudares Camargos <fernando.laudares AT revolutionlinux DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Mon, 28 Sep 2009 11:09:15 -0400

Hello Holger and Les,

Holger Parplies wrote:
> Hi,
> 
> [can we agree on avoiding tabs in subject lines?]
> 
> Les Mikesell wrote on 2009-09-25 23:25:35 -0500 [Re: [BackupPC-users] Backing 
> up a BackupPC host - *using rsync+tarPCCopy*]:
>> Fernando Laudares Camargos wrote:
>>> [...]
>>> I'm doing two things (although I'm not sure that answers your question 
>>> correctly):
>>>
>>> 1) rsync of cpool without --delete (so, cpool will keep growing, no files 
>>> will ever be deleted. I assume that's fine apart from the fact it will take 
>>> more disk space).
>> BackupPC_nightly may rename chains of hash collisions in cpool as part of 
>> its 
>> cleanup.  If such a rename occurs between the rsync runs and the 
>> BackupPC_tarPCCopy or restore, you'll end up with links to the wrong files.

I wasn't aware that BackupPC_nightly renames chains of hash collisions in the 
cpool, so indeed it's not as harmless as I first thought ...
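
For anyone following along: pool files whose contents hash to the same pool 
name get numeric suffixes (<hash>, <hash>_0, <hash>_1, and so on), and it is 
those suffixes that BackupPC_nightly may shuffle when it deletes an unused 
member of a chain. Assuming the usual /var/lib/backuppc layout (adjust the 
path for your install), this is a quick way to check whether a pool contains 
any chains at all:

    # list pool files carrying a _N collision suffix
    find /var/lib/backuppc/cpool -type f -name '*_[0-9]*' | head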

> actually, I don't believe you even need that to happen for problems to occur.
> 
> As far as an rsync pool update is concerned, the contents of some pool files
> will have changed if a chain gets renumbered. rsync has no concept of renamed
> files, and even if it did, from looking at the pool alone it couldn't know
> what to do (because that depends on the other links pointing to the file).

Ok, one more point to consider in the approach I'm using ...

> If you are using --inplace, I believe the destination pool files will be
> overwritten, thereby making *previously existing links to them* point to
> incorrect content. You're probably not doing that, so you will probably "only"
> have the pool file deleted and replaced with a new one with new contents. As a
> result, the existing links in the pc/ directories will no longer take part in
> pooling in your copy. You'll have a new independent copy of the contents under
> the new pool file name which subsequent backups might link to (providing it's
> not renamed again). I really don't see you gaining anything from running rsync
> *without* --delete. With --delete, you could at least expire backups from your
> copy (i.e. pc/host/num/ trees) and get back some space (well, more space,
> really, because you get back some space from files severed from pooling by
> chain renumbering as described above).

I'm not using --inplace, and I see your (valid) point about using --delete.
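
For the record, the kind of invocation I would switch to is just a plain pull 
of the compressed pool, now with --delete; the paths and host name below are 
examples from my setup, assuming the pool lives under /var/lib/backuppc on 
both machines:

    # pull the cpool from the main server; --delete so renamed or removed
    # chain members do not linger on the copy
    rsync -a --delete backuppc-main:/var/lib/backuppc/cpool/ /var/lib/backuppc/cpool/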

> What exactly are you trying to do, anyway?
> 
> 1. Have a copy of the pool that BackupPC could run on if the original pool is
>    lost, or
> 2. have a copy of the pool suitable for *restoring files only* if the original
>    pool is lost, or
> 3. something else?
> 
> You're not achieving (1), though (2) would probably work.

What I'm trying to do is option (2). Actually, from what I have read on this 
list, the desire to have a backup of the data on the main BackupPC server is 
common among many users. Running two independent backup servers at different 
sites would place double the load on the clients, and sometimes that is not 
feasible (if the backup already takes all night to complete, for example), as 
opposed to concentrating the load of the secondary backup on the main backup 
server.

So, to get back to your question, what we're trying to accomplish is to keep a 
synchronized copy of the data on the main BackupPC server (cpool + the backup 
sets of the PCs) on a separate server. If we lose the main server we would 
like to be able to do both:
1) restore files
2) start using the secondary server to take the backups until we can recover 
the main server

The situation described above could be handled with DRBD+Heartbeat when you 
have a really good network connection between the primary and the secondary 
backup servers, which is not our case most of the time.

In fact, if we could guarantee that all files are in the cpool and we had a 
way to identify them in the repository (using a database to relate the md5sum 
to a file name, for instance), that could solve part of the problem. We would 
"only" need to rsync the cpool and, in case of a disaster, we could at least 
manually recover the essential files. It's not a complete solution, but one 
that would fit well in some cases.
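
Just to sketch the idea (this is nothing BackupPC provides out of the box, the 
paths are assumptions, and BackupPC_zcat is only needed because cpool files 
are stored compressed): a flat index relating the md5sum of each file's 
uncompressed contents to its pool file could be built roughly like this, and 
then searched when we need to pull an essential file out of the copy by hand:

    # build a "md5sum  pool-file" index of the compressed pool
    cd /var/lib/backuppc/cpool
    find . -type f | while read -r f; do
        sum=$(/usr/share/backuppc/bin/BackupPC_zcat "$f" | md5sum | cut -d' ' -f1)
        printf '%s  %s\n' "$sum" "$f"
    done > /var/lib/backuppc/cpool-index.txt

Given the md5sum of a file we know we need, grepping that index tells us which 
cpool file to run through BackupPC_zcat to get the contents back; the original 
file names would still have to come from somewhere else (a list kept on the 
clients, for instance).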

I'm going to try Jeffrey's script to redo the linking today and see how that 
affects the size of the tar files created with BackupPC_tarPCCopy.
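
In case it is useful to someone else following the thread, the tarPCCopy step 
I'm testing is essentially the one from the documentation, run from the pc/ 
directory of the copy once the cpool has been rsync'ed over (paths and the 
host name are examples; tar needs -P so the link targets generated by the 
script are kept as-is):

    # on the secondary server, as the backuppc user
    cd /var/lib/backuppc/pc
    ssh backuppc-main /usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc | tar xvPf -

This assumes TopDir is the same path on both machines, so the hardlinks the 
archive creates land in the pool we just copied.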

> How much "more disk space" have you got for your copy?

Not that much more, around 15%, but then the system has not been in use for 
very long, and that figure will surely grow with time.

I'm glad we're taking the time to discuss this again; I'm sure it will benefit 
a lot of the people using this great piece of software that is BackupPC.

Regards,
-- 
Fernando Laudares Camargos

      Révolution Linux
http://www.revolutionlinux.com
---------------------------------------
** Any views and opinion presented in this e-mail are solely those of
the author and do not necessarily represent those of Révolution Linux.


> 
> Regards,
> Holger
> 

