Subject: Re: [BackupPC-users] distribution packages and tar upgrades + full backups
From: Les Mikesell <lesmikesell AT gmail DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Sat, 11 Sep 2010 13:19:08 -0500
On 9/11/10 11:38 AM, Timothy Omer wrote:
>
> So when an Incremental runs, if the last backup is a Full backup, files
> that are the same never get transferred to the server.

Yes, and in an incremental run, only the filename, timestamp, and length are 
compared.  In a full run, a block checksum comparison is done, but the contents 
are not transferred except for any differences found.

> If the last
> backup was Incremental then any files that have not changed since the
> last Incremental will be transferred to the server, but then dropped.

If you mean files that weren't in the previous full, that's right - they are 
transferred again in each incremental.

> Therefore you still get the space saving as the files will be hard
> linked, but you will not save in bandwidth.

Yes, but you can use incremental levels to change this behavior and have the 
server merge the incremental and backing full for the comparison.  This takes 
some extra work on the server side but may be worth it if bandwidth is 
restricted.
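As a sketch of how that is configured (BackupPC 3.x config names; check the
config.pl shipped with your version), multi-level incrementals look something
like this:

```perl
# config.pl or a per-host override (BackupPC 3.x).
# The default is [1]: every incremental is level 1 and is compared only
# against the last full, so files new since the full are re-sent on
# every incremental.  With increasing levels, each incremental is
# compared against the most recent backup of a lower level instead,
# at the cost of the server merging the incremental and backing full
# views on its side.
$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];
```

For example, with a weekly full and daily incrementals at those levels, day
two's level-2 incremental is compared against day one's level-1 incremental
rather than against the full, so unchanged new files are not transferred again.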

> As you said, just doing full backups will mean I only transfer data
> that changes. One of the benefits of having multiple traditional full
> backups is that you can have the same file backed up twice, which
> covers you if one gets corrupted - but as BackupPC hard links all
> duplicate files that will never be the case (sorry if that sounds
> silly, but just confirming)

This is why full runs set the --ignore-times option for rsync: it forces a 
block checksum comparison to detect changes even when the name, timestamp, and 
length are all the same.  Pooling is based on file content, with hash collision 
management, so any change to the content produces a new entry in the pool.  If 
you turn on checksum caching, the server-side file is not read on every full; a 
small percentage of the time it is still read again as a test for corruption.
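A hedged sketch of the config knobs involved (again BackupPC 3.x names; 
checksum caching also requires an rsync/File::RsyncP combination that supports 
the checksum-seed extension):

```perl
# Full backups add --ignore-times on top of these automatically.
# Appending --checksum-seed with the documented magic value 32761
# turns on cached block and file checksums on the server side.
$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times', '--block-size=2048',
    '--recursive',
    '--checksum-seed=32761',   # enable checksum caching
];

# Fraction of cached-checksum files that are nevertheless re-read and
# verified during a full, as a guard against silent pool corruption.
$Conf{RsyncCsumCacheVerifyProb} = 0.01;   # default: 1%
```

The verify probability is the "small percentage" mentioned above: raising it 
trades more server-side disk reads for earlier detection of corrupted pool 
files.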

-- 
     Les Mikesell
       lesmikesell AT gmail DOT com

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/