Subject: Re: [BackupPC-users] Backing up a BackupPC host - *using rsync+tarPCCopy*
From: Fernando Laudares Camargos <fernando.laudares AT revolutionlinux DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Fri, 25 Sep 2009 08:56:35 -0400
Hello,

I beg your pardon for bringing this topic to the table again. I have read this 
entire thread (even the scientific debate about trusting the probability 
calculations) and have tried the main suggestions you detailed to solve this 
problem (except the RAID-mirror approach described by Les Mikesell, which does 
not fit our needs).

What worked best for us was the strategy of copying the 'cpool' with a 
standard rsync (v3) and then using BackupPC_tarPCCopy to re-create the sets of 
backups. I have further refined that approach with a script that breaks the 
rsync of the 'cpool' into multiple rsyncs (one per sub-directory) and "smartly" 
runs BackupPC_tarPCCopy only for the new backup sets of each pc.
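
For illustration, here is a minimal sketch of the per-sub-directory loop (the 
paths and rsync options are assumptions to show the idea, not our exact 
script):

    #!/bin/sh
    # Break the cpool rsync into one run per top-level sub-directory
    # (cpool/0 ... cpool/f), keeping each run's file list small.
    SRC=/var/lib/backuppc/cpool
    DST=backup-server:/var/lib/backuppc/cpool
    for d in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
        # -a: preserve attributes; -H: preserve hard links within the run;
        # --delete: drop files that were renamed or removed at the source
        rsync -aH --delete "$SRC/$d/" "$DST/$d/"
    done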

This approach works best in the environments where we use Coraid boxes and can 
mount both the regular BackupPC partition and the backup partition on the same 
server, so that BackupPC_tarPCCopy can write directly to the backup partition. 
In the other cases we need to create a tar file with BackupPC_tarPCCopy, copy 
it to the backup server over the network, and then untar the file there - 
which adds a new level of complexity to the solution.
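
For the network case, the sequence looks roughly like the sketch below 
(hostnames and paths are hypothetical; per the BackupPC documentation, 
BackupPC_tarPCCopy writes the archive to stdout, and it is extracted with 
tar's -P option because the entries point into the pool by path):

    # Sketch of the tar-file variant; run as the backuppc user.
    # The cpool copy must already be in place on the destination.
    BackupPC_tarPCCopy /var/lib/backuppc/pc/somehost > /tmp/somehost-pc.tar
    scp /tmp/somehost-pc.tar backup-server:/tmp/
    ssh backup-server \
        'cd /var/lib/backuppc/pc && tar xPf /tmp/somehost-pc.tar'

When the link is reliable, the intermediate file can be skipped by piping 
BackupPC_tarPCCopy straight into "ssh backup-server 'cd ... && tar xPf -'".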

Anyway, the first step of this approach (the rsync of the cpool) works 
reasonably well (for cpools containing a few terabytes of data). What doesn't 
always work well is BackupPC_tarPCCopy, which sometimes produces tar files 
that are too big (several GB, e.g. for the backups of Zimbra and database 
servers), which leads to the following question:

    * Why does BackupPC_tarPCCopy sometimes produce big tar files? *

From what I understand, after a backup completes, BackupPC_link is run to 
transfer all the new files (other than BackupPC system files such as attrib 
files) into the 'cpool', replacing them with hard links. A tar file generated 
from a 'linked' backup set should therefore contain mainly the relations 
between the files that compose the data set and their relative positions in 
the cpool, so the same tool can be used to untar the file and re-create the 
data set as hard links. This brings me to my second and last question:

    * Is this (BackupPC_tarPCCopy creating big files) happening because the 
interval between my backups (the end of one and the start of the next) does 
not give BackupPC enough time to run BackupPC_link, so the files in those 
data sets never get 'linked' into the cpool? *
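
(One way to check, sketched below with a hypothetical archive name: in a tar 
produced from a fully 'linked' backup set, almost every entry should be a 
hard link into the pool, which GNU tar's verbose listing marks with "link 
to".)

    # Count hard-link entries vs. other entries in the archive; a large
    # number of regular files would suggest the set was never linked.
    tar tvf /tmp/somehost-pc.tar | awk '
        /link to/ { links++; next }
        { others++ }
        END { print links+0, "hard links,", others+0, "other entries" }'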

Ideally I would like to have all files in the cpool, so that the rsync of 
that directory would take care of most of the work. If the answer to the 
second question is yes, it would be great to have a way to run BackupPC_link 
manually to 'clean up' the backup sets.
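
If someone can confirm that running it by hand is safe while the server is 
up, I imagine the invocation would look something like this (the install 
path is an assumption, and I have not verified the locking implications):

    # Hypothetical manual run of the link step for one host, as the
    # backuppc user; BackupPC normally runs this itself after each backup.
    su - backuppc -c '/usr/share/backuppc/bin/BackupPC_link somehost'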

I would appreciate your views on these two questions.

Regards,
-- 
Fernando Laudares Camargos

      Révolution Linux
http://www.revolutionlinux.com
---------------------------------------
* Any views and opinions presented in this e-mail are solely those of
the author and do not necessarily represent those of Révolution Linux.


Peter Walter wrote:
> All,
> 
> I have implemented backuppc on a Linux server in my mixed OSX / Windows 
> / Linux environment for several months now, and I am very happy with the 
> results. For additional disaster recovery protection, I am considering 
> implementing an off-site backup of the backuppc server using rsync to 
> synchronize the backup pool to a remote server. However, I have heard 
> that in a previous release of backuppc, rsyncing to another server did 
> not work because backuppc kept changing the file and directory names in 
> the backup pool, forcing the remote rsync server to 
> re-transfer the entire backup pool (because it thinks the renamed files 
> are new files).
> 
> I have searched the wiki and the mailing list and can't find any 
> discussion of this topic. Can anyone confirm that the way backuppc 
> manages the files and directories in the backup pool would make it 
> difficult to rsync to another server, and, if so, can anyone suggest a 
> method for "mirroring" the backuppc server at an offsite backup machine?
> 
> Regards,
> Peter


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/