BackupPC-users

Re: [BackupPC-users] Advice on creating duplicate backup server

2008-12-08 08:34:01
From: Holger Parplies <wbppc AT parplies DOT de>
To: "Nils Breunese (Lemonbit)" <nils AT lemonbit DOT com>, Stuart Luscombe <s.luscombe AT drc.ion.ucl.ac DOT uk>
Date: Mon, 8 Dec 2008 14:32:11 +0100
Hi,

Nils Breunese (Lemonbit) wrote on 2008-12-08 12:23:40 +0100 [Re: 
[BackupPC-users] Advice on creating duplicate backup server]:
> Stuart Luscombe wrote:
> 
> > I've got the OS (CentOS) installed on the new server and have
> > installed BackupPC v3.1.0, but I'm having problems working out how
> > to sync the pool with the main backup server.

I can't help you with *keeping* the pools in sync (other than recommending that
you run the backups from both servers, like Nils said), but I may be able to
help you with an initial copy - presuming 'dd' isn't an option, as that would
otherwise be the preferred method. Can you mount either the old pool on the new
machine or the
new pool on the old machine via NFS? Or even better, put both disk sets in one
machine for copying? You would need to shut down BackupPC for the duration of
the copy - is that feasible? 3TB means you're facing about 10 hours even with
'dd', fast hardware and no intervening network - anything more complicated
will obviously take longer. Your pool size is 3TB - how large is the file
system it is on? Is the destination device at least the same size?
How many files are there in your pool?
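To put a number on the dd estimate above, here is a back-of-envelope sketch; the ~85 MB/s sustained throughput is an assumption for illustration, not a measurement, and the device names in the comment are placeholders:

```shell
# Assumed sustained disk-to-disk throughput: ~85 MB/s (no network in between).
POOL_MB=$((3 * 1000 * 1000))        # 3 TB expressed in MB
RATE=85                             # assumed MB/s
HOURS=$(( POOL_MB / RATE / 3600 ))  # integer division
echo "estimated copy time: ~${HOURS} hours"   # -> ~9 hours, i.e. the ~10h figure above

# The copy itself, with both disk sets in one machine (device names are
# placeholders -- triple-check them before running anything like this):
#   dd if=/dev/sdOLD of=/dev/sdNEW bs=64K
```

Any network hop or filesystem-level tool on top of that will only push the number up.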

> > I managed to rsync the  
> > cpool folder without any real bother, but the pool folder is the  
> > problem,

Err, 'pool' or 'pc'? ;-)

> > and a cp -a didn't seem to work
> > either, as the server filled up, assumedly as it's not copying the
> > hard links correctly?

That is an interesting observation. I had always wondered exactly in which
way cp would fail.
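For what it's worth, GNU cp -a does preserve hardlinks as long as it can keep its table of already-seen inodes in memory; a throwaway sketch (all paths are just examples):

```shell
# Build a miniature "pool": two directory entries sharing one inode.
mkdir -p /tmp/hl-demo/src
echo data > /tmp/hl-demo/src/a
ln /tmp/hl-demo/src/a /tmp/hl-demo/src/b

# cp -a remembers the inodes it has seen and recreates the link in the copy...
cp -a /tmp/hl-demo/src /tmp/hl-demo/dst

# ...so the copied files still share one inode (hard link count 2):
stat -c '%h' /tmp/hl-demo/dst/a    # prints 2

# On a pool with millions of files, that inode-tracking table is exactly
# what exhausts memory -- consistent with the "server filled up" symptom.
rm -rf /tmp/hl-demo
```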

> > So my query here really is am I going the right way about this? If
> > not, what's the best method to take so that say once a day the
> > duplicate server gets updated.

Well, Dan, zfs? ;-)
Presuming we can get an initial copy done (does anyone have any ideas on how
to *verify* a 3TB pool copy?), would migrating the BackupPC servers to an
OpenSolaris kernel be an option, or is that too "experimental"?
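On the verification question, one partial idea: inode numbers won't match between the two copies, but each file's hard link count, size and relative path should. A fingerprint along these lines (GNU find assumed; paths in the usage comment are placeholders) can be computed on both machines and compared:

```shell
# Fingerprint a tree's link structure: link count, size and relative path
# for every file, sorted and hashed. Equal digests on both machines mean
# the trees agree on layout, sizes and per-file link counts.
pool_fingerprint() {
    ( cd "$1" && find . -type f -printf '%n %s %P\n' | sort | md5sum )
}

# Hypothetical usage, once per server:
#   pool_fingerprint /var/lib/backuppc
#   pool_fingerprint /mnt/newpool
```

Caveat: this checks counts, sizes and names, not file contents and not *which* paths share an inode; a checksum pass would catch more but costs another full read of 3TB.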

> Check the archives for a *lot* of posts on this subject. The general  
> conclusion is that copying or rsyncing big pools just doesn't work  
> because of the large number of hardlinks used by BackupPC. Using rsync  
> 3.x instead of 2.x seems to need a lot less memory, but it just ends  
> at some point.

Because the basic problem for *any general purpose tool* remains: you need a
full inode-number-to-file-name mapping for *all files* (there are next to no
files with only one link in a pool FS), meaning *at least* something like 50
bytes per file, probably significantly more. You do the maths.
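Doing that maths for a concrete pool size (the 10-million-file figure is an assumption for illustration):

```shell
FILES=10000000        # assumed pool: 10 million files
BYTES_PER_ENTRY=50    # the lower-bound estimate from above
TOTAL_MB=$(( FILES * BYTES_PER_ENTRY / 1024 / 1024 ))
echo "minimum inode-to-name map: ~${TOTAL_MB} MB"   # -> ~476 MB
```

Half a gigabyte as a *lower bound*, just for the bookkeeping, before the tool copies a single byte - and real per-entry overhead (path strings, hash buckets) is likely several times that.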

Apparently, cp simply ignores hardlinks once malloc() starts failing, but I'm
just guessing.

This doesn't mean it can't be done. It just means *general purpose tools* will
start to fail at some point.

> A lot of people run into this when they want to migrate  
> their pool to another machine or bigger hard drive. In that case the  
> usual advice is to use dd to copy the partition and then grow the  
> filesystem once it's copied over.

The only problem being that this limits you to the same FS with the same
parameters (meaning if you've set up an ext3 FS with too high or too low an
inode-to-block ratio, you can't fix it this way). And the fact remains that
copying huge amounts of data simply takes time.

> Instead of trying to sync the pool, can't you just run a second  
> BackupPC server that also backs up your machines?

If you don't need the current backup history on the redundant server, save
yourself the pain of the initial pool copy and just follow this path -
presuming network and client load constraints allow you to.

One other thing: is your pool size due to the amount of backed up data or due
to a long backup history? If you just want to ensure you have a recent version
of your data (but not the complete backup history) in the event of a
catastrophe, archives (rather than a copy of the complete pool) may be what
you're looking for.
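If archives would do, BackupPC ships BackupPC_tarCreate, which you could drive from cron on the main server; a rough sketch of the idea - the host name, share, install path and destination are all placeholders, so check them against your own setup:

```shell
# Stream a tar of the most recent backup (-n -1) of one host's share
# straight to the duplicate server. All names and paths below are
# placeholders; the BackupPC bin directory varies by distribution.
su - backuppc -c '/usr/share/backuppc/bin/BackupPC_tarCreate \
    -h somehost -n -1 -s /home .' \
  | ssh duplicate-server 'cat > /archives/somehost-home.tar'
```

That gives you a plain tar per host per day - no hardlink problem at all, at the cost of losing the deduplicated history.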

Regards,
Holger

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/