Subject: Re: [BackupPC-users] rsync clients run out of memory
From: Andrew Libby <alibby AT xforty DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Fri, 20 Nov 2009 13:08:30 -0500
Hi Richard,


Richard Hansen wrote:
> Les Mikesell wrote:
>> Richard Hansen wrote:
>>> Apparently not -- both of my clients already have rsync 3.0.5 installed, 
>>> yet rsync is causing them to run out of memory.  The clients have 4 to 6 
>>> million (largely redundant) files each.  It appears that I need the 
>>> incremental-recursion feature of protocol 30 to back up this many files.
>> If they are grouped in several subdirectories, you could break the 
>> backups into separate runs.  If they are all in one directory, even 
>> protocol 30 probably won't help.
> 
> The files are in several subdirectories.  I was contemplating breaking 
> up the run, but there's a complication:  The set of subdirectories 
> (underneath the only directory where it makes sense to split up the run) 
> changes over time.  It's a slow change, so maybe I can keep a sharp eye 
> out and manually adjust RsyncShareName as subdirectories are added and 
> removed (blech).

I've encountered a similar situation: a mail server.  It
uses maildir, which stores each email in its own file, so
there are lots of files -- roughly 7 million on that one
host.

I bind mounted user home directories under
/var/bind_mounts/a/[a_username] to distribute the files
across several top-level folders.  In reality we use bucket
names like

0-9, a-c, d-h, etc.

Each of those /var/bind_mounts/[foldername] directories is
then set up as a share to be backed up on the client, and
/var/bind_mounts is excluded from the backup of the / share.

All of this is accomplished with a script that runs before
and after the rsync call in RsyncClientCmd and
RsyncClientRestoreCmd to build up and tear down the mounts.

We did it this way so it'd be hands off: we don't need to
worry about users being added or removed, because the bind
mounts are set up fresh just before each backup is taken.
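To make the scheme concrete, here's a minimal sketch of what
such a pre/post script could look like.  The script name,
paths, and bucket ranges here are illustrative, not our
actual production script:

```sh
#!/bin/sh
# Hypothetical setup_bind_mounts.sh: distribute /home/<user>
# across bucket directories under /var/bind_mounts so each
# bucket can be backed up as its own share.

BASE=/var/bind_mounts

# Map a username to a bucket by its first character.
# Bucket ranges are illustrative.
bucket_for() {
    case "$1" in
        [0-9]*) echo "0-9" ;;
        [a-c]*) echo "a-c" ;;
        [d-h]*) echo "d-h" ;;
        *)      echo "i-z" ;;
    esac
}

# Bind mount every home directory into its bucket.
setup() {
    for home in /home/*; do
        [ -d "$home" ] || continue
        user=$(basename "$home")
        b=$(bucket_for "$user")
        mkdir -p "$BASE/$b/$user"
        mount --bind "$home" "$BASE/$b/$user"
    done
}

# Unmount everything we mounted.
teardown() {
    for m in "$BASE"/*/*; do
        mountpoint -q "$m" && umount "$m"
    done
}

case "$1" in
    setup)    setup ;;
    teardown) teardown ;;
esac
```

The idea is that RsyncClientCmd (and RsyncClientRestoreCmd)
invokes this on the client with "setup" before the rsync
call and "teardown" after it, so the buckets only exist for
the duration of the backup.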

Does this make sense?


> 
>> Using tar as the xfer method would avoid the issue with the tradeoff 
>> that you use more bandwidth for full runs and don't reflect changes 
>> quite as accurately in increments.

Ultimately, we decided against this for the reasons you
mention.

> 
> I've switched to tar for now, and I'm hoping that it will prove to be an 
> adequate solution.
> 
> Thanks for your help,
> Richard
> 

Best,

Andy

-- 

===============================================
xforty technologies
Andrew Libby
alibby AT xforty DOT com
http://xforty.com
===============================================


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
