Subject: Re: [BackupPC-users] Incremental Seems To Backup Whole System
From: Les Mikesell <lesmikesell AT gmail DOT com>
To: backuppc-users AT lists.sourceforge DOT net
Date: Wed, 03 Mar 2010 11:53:38 -0600
On 3/3/2010 8:37 AM, Mike Bydalek wrote:
> I hate to bring this up again, but after taking advice from Les and John,
> I'm not seeing what I think I should be seeing.  After changing my
> current config to the one below, I started to get incr, incr, full,
> incr, incr, full, but the fulls were doing the entire 600G.
>
> Here's what I have for my Host Schedule:
> XferMethod: rsync
> FullPeriod: 1.97
> FullKeepCnt: 12
> FullAgeMax: 13.5
> IncrPeriod: 0.49
> IncrKeepCnt: 28
> IncrLevels: 1,2,3,4
>
> What was odd was that full backup #0 wouldn't go away, even after 20
> days, so I decided to just completely wipe the pool and start over.

Fulls aren't deleted as long as any subsequent incremental depends on 
them or if they are needed to meet the FullKeepCnt or FullKeepCntMin values.
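
For reference, all of those settings are just $Conf{...} entries in the 
host's own per-PC config file.  A minimal sketch of what your schedule 
looks like there - the FullKeepCntMin line is only the default, shown to 
illustrate the retention point, not something taken from your config:

  $Conf{XferMethod}     = 'rsync';
  $Conf{FullPeriod}     = 1.97;          # a full roughly every 2 days
  $Conf{FullKeepCnt}    = 12;            # keep 12 fulls
  $Conf{FullAgeMax}     = 13.5;          # fulls older than ~13.5 days may expire...
  $Conf{FullKeepCntMin} = 1;             # ...but never drop below this many (default)
  $Conf{IncrPeriod}     = 0.49;          # at least 0.49 days between incrementals
  $Conf{IncrKeepCnt}    = 28;
  $Conf{IncrLevels}     = [1, 2, 3, 4];  # multi-level incrementals

With that schedule, expect backup #0 to stick around until enough newer 
fulls exist to satisfy FullKeepCnt and no remaining incremental depends 
on it.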


> Here's the last few backup numbers:
>
> Type  Level  Files  Duration
> full  0      1006837  2535
> incr  1      1618    69
>
> The system is currently backing up and has been since 3/2 16:00, so
> it's doing another full.  This is telling me that the entire server is
> backed up on every full.

With rsync, a full still reads every file on the client to compute and 
compare block checksums, even though only the differences go over the 
network.  That read/compare pass can take a long time on a large 
filesystem.

> If I move this offsite, it's going to
> re-transfer the entire system over, which is what I *can't* have as
> it'll take way too long to back up this much data.

It should only be transmitting the differences; the time is spent 
reading and checksumming on both ends, not re-sending all 600G.
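
One knob that helps with the server side of that read/compare pass is 
rsync checksum caching - a side note, and only a sketch assuming 
BackupPC 3.x with the rsync XferMethod; the client still has to read 
every file either way:

  # In config.pl, below the stock RsyncArgs/RsyncRestoreArgs definitions,
  # append the checksum-seed option so block/file checksums get cached in
  # the pool and don't have to be recomputed on every full:
  push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
  push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';

  # Spot-check 1% of cached checksums against the real data (the default):
  $Conf{RsyncCsumCacheVerifyProb} = 0.01;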

> Am I missing something here or doing something wrong?  I would have
> thought that the diffs between the last increment and the current
> backup would be "merged" somehow to create the latest full.

No, the comparison is against the last full, so the differences copied 
during incrementals are copied again in the subsequent full.  For 
example, a file that changed right after the last full is picked up by 
an incremental and then transferred again by the next full, because that 
full compares against the previous full, not the incrementals.

> I'm running version 3.2.0beta1 as well.  Thanks for any assistance!

The directory structure is read and transmitted in full before the 
comparison starts, so if you have a very large number of files you may 
need a large amount of RAM to keep this from going to swap and becoming 
very slow.  If there are logical subdirectory boundaries you could break 
the filesystem into separate runs.  You can take that a step further by 
setting up what appear to be different hosts, using the ClientNameAlias 
setting to point them back at the same real machine - then you can give 
each run its own schedule and skew the fulls.
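
To make that concrete, a rough sketch - the virtual host names, the real 
hostname, the shares and the periods below are all made-up examples, not 
anything from your setup:

  # In the hosts file, one line per virtual host:
  #   srv-home   0   backupadmin
  #   srv-var    0   backupadmin

  # Per-PC config for srv-home -- only /home from the real machine
  $Conf{ClientNameAlias} = 'realserver.example.com';
  $Conf{XferMethod}      = 'rsync';
  $Conf{RsyncShareName}  = ['/home'];
  $Conf{FullPeriod}      = 6.97;   # weekly full

  # Per-PC config for srv-var -- only /var from the same machine,
  # with its first full started a few days later so the fulls don't
  # land on the same night
  $Conf{ClientNameAlias} = 'realserver.example.com';
  $Conf{XferMethod}      = 'rsync';
  $Conf{RsyncShareName}  = ['/var'];
  $Conf{FullPeriod}      = 6.97;

Each virtual host gets its own backup chain and schedule, and because of 
pooling the data is still only stored once.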

-- 
   Les Mikesell
    lesmikesell AT gmail DOT com

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
