Subject: Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)
From: John Rouillard <rouilj-backuppc AT renesys DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Mon, 2 Mar 2009 18:20:00 +0000
On Mon, Mar 02, 2009 at 08:45:22AM +0200, Brad C wrote:
> Hi John,
> 
> On Mon, Feb 16, 2009 at 8:42 PM, John Rouillard
> <rouilj-backuppc AT renesys DOT com> wrote:
> 
> > Hi Craig:
> >
> > On Thu, Feb 12, 2009 at 11:24:28PM -0800, Craig Barratt wrote:
> > > Tony writes:
> > > > I missed the original post, but  I run rsync with the --whole-file
> > > > option, but I still get RStmp files, is that not supposed to happen?
> > >
> > > RStmp is a temporary file used to store the uncompressed pool file,
> > > which is needed for the rsync algorithm.  It's only used for larger
> > > files - smaller files are uncompressed in memory.
> > >
> > > RStmp is independent of --whole-file.
> >
> > What about when there is no prior file? I have explicitly deleted the
> > original file from the prior backup and I still get an RStmp file. Is
> > it just filled with zeros or something?
> >
> > I agree with Tony that the new file is created very slowly. I have
> > 14GB of the 38GB file transferred and the latest backup attempt has
> > been running since:
> >
> >    2009-02-12 19:58
> >
> > so that is very slow indeed.
> >
> > Is there some data I can get to try to figure out where the bottleneck
> > is? Since Tony said he sees the same issues using --whole-file I guess
> > that won't solve my problem either.
>
> I'm having the identical issue; I moved from pure rsync, which wasn't
> causing any problem that I could see.
> Also with large database files (12GB).

If you run cmp -l between two copies of the database file (yesterday's
and today's, for example), how many bytes differ?

  cmp -l yesterday.db today.db | wc -l

will give you the number of differing bytes.
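
If you want to put that count in context against the total file size,
something along these lines works (a rough sketch using the same example
file names; stat -c assumes GNU coreutils):

  # fraction of bytes that changed between the two copies
  diff_bytes=$(cmp -l yesterday.db today.db | wc -l)
  total_bytes=$(stat -c %s today.db)
  echo "$diff_bytes of $total_bytes bytes differ"

If only a small fraction has changed, the rsync delta should be small,
and a multi-day transfer points at the transfer code rather than the
data itself.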

> Strangely enough, if I clear the backup manually and then set it to full

How do you manually clear the backup: do you delete the whole
/backuppc-root/pc/hostname/#### tree, or just the stored copy of the
database file?
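
To make the question concrete, the two interpretations I have in mind
look roughly like this (paths and the file name are illustrative,
following the /backuppc-root/pc/hostname/#### layout above; N stands
for the backup number):

  # delete the entire numbered backup tree for the host
  rm -rf /backuppc-root/pc/hostname/$N
  # versus deleting only the stored copy of the one big file
  # (BackupPC stores names mangled with a leading "f", hence the wildcard;
  #  -delete assumes GNU find)
  find /backuppc-root/pc/hostname/$N -name '*big.db' -delete

The distinction matters because it determines whether BackupPC still has
a prior version of the file to run the rsync deltas against.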

> it backs up in 47 minutes flat; otherwise it could sit for days. Not
> sure what I should try. I could script removal of the files before
> every backup, but that would be a shortcut rather than a fix for the
> underlying problem.

My backup finally finished after 4 days and 15 hours. Now I am
incrementally backing up the file without issue (it has about 100MB of
byte differences in steady state). My guess is that the rsync Perl
module is having trouble with large changes within a file. I never did
try the --whole-file option.

Something seems to be wacky in the rsync Perl module (File::RsyncP) used
by BackupPC, since, like you, I could transfer the file with stand-alone
rsync(1) in under two hours, whether I was rsyncing to a non-existent
file or to an old copy of the database.
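
For reference, the stand-alone timing test looked roughly like this (a
sketch, not the exact invocation; the host and paths are placeholders):

  # delta transfer against an existing old copy of the database
  time rsync -av dbhost:/var/db/big.db /scratch/big.db
  # the same transfer to a destination file that does not yet exist
  time rsync -av dbhost:/var/db/big.db /scratch/big.db.new

Both runs came in under two hours, against the 4 days and 15 hours the
same file took through BackupPC.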

So I am afraid I don't have any good ideas here.

-- 
                                -- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111
