BackupPC-users

Re: [BackupPC-users] When do files get re-transferred?

Subject: Re: [BackupPC-users] When do files get re-transferred?
From: Rahul Amaram <rahul AT synovel DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Sun, 25 Dec 2011 08:18:41 +0530
Thanks for the responses. I am slightly lost in all the technical 
discussion. My requirement is simple: I do backups over a WAN using 
rsync over SSH, and I have over 100 GB of files to be synced over the 
WAN. Transferring a couple of GB over the WAN is fine, but 100 GB 
might take a really long time.

Generally, when transferring data with rsync, it compares the remote 
files against the local files using a checksum algorithm and transfers 
data only when they differ. From what I know, the checksum comparison 
uses only a fraction of the bandwidth required to transfer the whole 
file. So when BackupPC performs a full backup, does it compare each 
file against the copy stored during the previous full backup, or does 
it just blindly copy the entire file?
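For what it's worth, the delta idea described above can be sketched in a few lines. This is only an illustration of the principle (per-block checksums compared between the two sides, so only changed blocks need to cross the wire); the block size and digest are assumptions for the sketch, not rsync's or BackupPC's actual algorithm, which uses rolling checksums and variable block sizing:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real rsync chooses its own


def block_digests(data: bytes):
    """One checksum per fixed-size block of the file."""
    return [hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(old: bytes, new: bytes):
    """Indices of blocks whose checksums differ; only these blocks
    would need to be transferred, not the whole file."""
    old_sums = block_digests(old)
    new_sums = block_digests(new)
    return [i for i, s in enumerate(new_sums)
            if i >= len(old_sums) or s != old_sums[i]]


old = b"a" * 8192 + b"b" * 4096
new = b"a" * 8192 + b"c" * 4096  # only the last block differs
print(changed_blocks(old, new))  # -> [2]
```

Only the checksums (a few dozen bytes per block) travel over the WAN in both directions; the file data itself is sent only for blocks that actually changed.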

Also, from your responses, it seems that more frequent incremental 
backups are what you suggest. However, is there any downside to this? 
For instance, suppose the full backup is about six months old and some 
file in it gets corrupted. Will that be caught by the incremental 
backups?

Thanks,
Rahul.

On Saturday 24 December 2011 08:18 AM, hansbkk AT gmail DOT com wrote:
> On Sat, Dec 24, 2011 at 9:34 AM, Les Mikesell <lesmikesell AT gmail DOT com>  
> wrote:
>>> Thanks Les. So my snip above does hold when trying to conserve
>>> bandwidth (say over a WAN), but at the potential cost of increasing
>>> the time the backup session requires. In a high-speed local
>>> environment, processing time can be reduced by always using
>>> "differential" between fulls (by not enabling the "incremental"
>>> option).
>>>
>>> This only becomes a question if I got it wrong 8-)
>> The more significant difference may be the wall-clock time for a
>> full rsync run, which always does a full read of all the data on the
>> remote side for a block checksum comparison, and may need to
>> read/uncompress on the server side.   If that isn't an issue you can
>> just do frequent fulls and not worry about doing rsyncs against
>> incremental levels.   If it is an issue, or you want to use the least
>> bandwidth possible, then you might use incremental levels and less
>> frequent fulls.
> Yes, in my current usage, I've only been doing fulls since figuring
> out it didn't impact storage space usage. I just wanted to clarify
> understanding the trade-offs between the "other flavors" for future
> reference in possible other contexts.
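Les's point about bandwidth versus wall-clock time can be made concrete with a back-of-envelope estimate. The block size and per-block checksum cost below are illustrative assumptions, not rsync's actual parameters, but they show why a full rsync of unchanged data costs little bandwidth even though both sides still have to read (and possibly uncompress) every byte:

```python
# Rough estimate of checksum traffic for a full rsync of unchanged data.
# BLOCK_SIZE and CHECKSUM_BYTES are assumptions for the estimate, not
# rsync's real parameters.
TOTAL_BYTES = 100 * 1024**3   # ~100 GB of files to back up
BLOCK_SIZE = 4096             # assumed block size
CHECKSUM_BYTES = 20           # assumed checksum bytes sent per block

blocks = TOTAL_BYTES // BLOCK_SIZE
checksum_traffic = blocks * CHECKSUM_BYTES
print(f"checksum traffic: {checksum_traffic / 1024**2:.0f} MiB "
      f"({100 * checksum_traffic / TOTAL_BYTES:.2f}% of a full transfer)")
# -> checksum traffic: 500 MiB (0.49% of a full transfer)
```

Under these assumptions, the checksum exchange is well under 1% of a full re-transfer, which is why the dominant cost of frequent fulls is usually disk I/O and CPU time on each end, not WAN bandwidth.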
>
> _______________________________________________
> BackupPC-users mailing list
> BackupPC-users AT lists.sourceforge DOT net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
