Subject: Re: [BackupPC-users] Full/Incremental disable
From: Les Mikesell <lesmikesell AT gmail DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Thu, 28 Oct 2010 11:30:39 -0500
On 10/28/2010 10:51 AM, thigginson wrote:
> Thank you both for your help :)
>
> Les, are you saying that with rsync the full backup does not transfer files 
> that match existing files (via checksum) in previous backups? This could be 
> quite useful for us.

Yes, that's pretty much the point of rsync.  Plus it will detect 
changed/extended files and only transfer the differing blocks.  The way 
backuppc does it, incremental runs quickly skip files where the file 
name, length, and timestamps match your existing copy, while full runs 
add the --ignore-times option so a block checksum comparison is run over 
the data.  If you have the checksum-seed option enabled in backuppc it 
will use cached checksums on the server side to save the work of 
uncompressing and recomputing, but the data is still read on the client 
side.
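
To make that concrete, here is roughly how the two modes map onto plain 
rsync flags (illustrative only - backuppc actually drives the protocol 
through its own File::RsyncP module rather than the rsync binary, and 
the paths are made up):

   # incremental-style: skip files whose name, size, and mtime all match
   rsync -a /data/ /backup/data/

   # full-style: ignore timestamps and block-checksum every file
   rsync -a --ignore-times /data/ /backup/data/

The checksum caching mentioned above is the documented trick of adding 
'--checksum-seed=32761' to $Conf{RsyncArgs} in config.pl.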

> Our zips are scripted to be incremental mid-week and of course a full copy 
> at the weekends; it's the only way we can manage the huge amounts of data. 
> It's also very convenient to copy the zip file across the local network (our 
> backuppc server is for offsite backups, so it has serious bandwidth 
> limitations if we have to do a full restore remotely).
>
> If this was your network how would you manage it? Have the scripts zip up 
> our data and make copies on the network, as well as having backuppc running 
> via rsync with compression for each of the appropriate folders? I take it 
> rsync compresses before it sends?

It would depend on the nature of the data, but my first choice would be 
to run separate instances of backuppc - one local and one remote - if you 
have a big enough backup window to complete the runs, with the blackout 
intervals set so they don't hit at the same time.  Next best would be to 
rsync locally to an uncompressed snapshot, then back that up from the 
remote backuppc (sketched below).  I might continue to zip parts that 
need the extra password protection.
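
A minimal sketch of that staging idea, with made-up paths - a cron job 
on the fileserver keeps a plain copy current, and the remote backuppc is 
pointed at that copy as its share:

   # local staging copy: fast on the LAN, uncompressed, easy to restore from
   rsync -a --delete /data/ /srv/staging/data/

The remote backuppc then only ever reads /srv/staging/data/ over the 
slow link.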

Native rsync knows how to compress on the fly (its -z option), but the 
backuppc implementation does not - although it will compress for storage 
and will only keep one copy of any file with identical contents.  You 
can work around the missing transfer compression by adding the -C option 
to ssh when running rsync over ssh, or, if you are using a vpn for access 
to the remote site, the vpn may have a compression option (like lzo in 
openvpn).
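
In a backuppc 3.x setup that would normally go in the host's config.pl. 
This is only an illustration based on the stock RsyncClientCmd - check 
your own version's default before copying it:

   # add -C so ssh compresses the rsync stream to the remote site
   $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';

Outside backuppc, the same idea is just:

   rsync -av -e 'ssh -C' /data/ user@offsite:/backups/data/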

But there will really only be a big win here if you are copying a lot 
of unchanging data into the zips or if there are several machines with 
duplicated files.  If the files always change, backuppc will always 
reconstruct and store a new copy even if it only transfers the changed 
blocks.  If you are happy with what you have, keep it; just run backuppc 
with rsync or rsyncd and do full runs moderately often.

-- 
   Les Mikesell
    lesmikesell AT gmail DOT com
