Subject: Re: [BackupPC-users] Moving lots of data on a client
From: Les Mikesell <lesmikesell AT gmail DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Tue, 20 Aug 2013 13:54:58 -0500
On Tue, Aug 20, 2013 at 1:23 PM, Raman Gupta <rocketraman AT gmail DOT com> 
wrote:
> I have a client on which about 100 GB of data has been moved from one
> directory to another -- otherwise it's exactly the same.
>
> As I understand it, since the data has been moved, BackupPC 3 will
> transfer all the data again (and discard it once it realizes the data
> is already in the pool) -- i.e., it does not skip the transfer of each
> file even though the checksum is identical to an existing file in the
> pool.
>
> I am using the rsync transfer method.
>
> Is there a workaround to prevent all 100 GB of data from being
> transferred again?

You should be able to make a matching move in the latest full backup
tree under the pc directory for that host, provided you understand the
filename mangling (each path component is preceded with 'f', and
characters such as % and / are URI-encoded), then force a full backup.
Note that this will break your ability to restore back to the old
location for the altered full and its subsequent incrementals.
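For anyone trying this, here is a small sketch of the mangling rule as
described above -- an 'f' prefix on each path component, with '%'
encoded as %25 and '/' as %2f (only share names actually contain '/').
This is an illustration, not code from BackupPC itself:

```shell
#!/bin/sh
# Mangle one path component the way BackupPC 3 stores it on disk:
# encode '%' first (so we don't double-encode), then '/', then add
# the 'f' prefix.
mangle() {
  printf '%s\n' "$1" | sed -e 's/%/%25/g' -e 's,/,%2f,g' -e 's/^/f/'
}

mangle "/"        # share name "/" -> f%2f
mangle "olddir"   # directory      -> folddir
mangle "100%"     # '%' in a name  -> f100%25
```

With the mangled names in hand, the move itself is an ordinary mv
inside the latest full's tree (host name, backup number, and paths
below are hypothetical -- adjust for your TopDir and installation):

    cd /var/lib/backuppc/pc/myclient/123
    mv f%2f/fdata/folddir f%2f/fdata/fnewdir

then force a full backup from the web UI (or via BackupPC_serverMesg).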

-- 
   Les Mikesell
     lesmikesell AT gmail DOT com

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/