BackupPC-users

Re: [BackupPC-users] Migration/merge BPC hosts questions

2011-08-11 10:43:42
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Thu, 11 Aug 2011 10:41:41 -0400
ft oppi wrote at about 13:05:07 +0200 on Thursday, August 11, 2011:
 > Hello list,
 > 
 > I've read the wiki and part of the list, but the solutions described there
 > don't satisfy me completely, so I'm looking for something else.
 > 
 > I have two "old" Linux servers running BackupPC 3.1.0 and I need to
 > migrate/merge them onto a single new server.
 > Compression is enabled on both. The cpools are 500 GB and 1200 GB,
 > covering about a hundred backed-up hosts.
 > FullKeepCnt is set to 8,0,12 (56 weeks), so I can't just deploy the new
 > server and keep the old ones around for that long; I need to migrate
 > everything to get rid of them.
 > I don't have to migrate everything at once, but I can't skip a day of
 > backups (customer requirement).
 > 
 > The new host would be relying on ZFS with compression, deduplication and
 > remote replication.
 > 
 > Currently I plan to do this:
 > 1) fresh install of BackupPC + mimic config (ssh keys, schedules, etc) on
 > new server,
 > 2) for each backed-up host:
 >   a) disable backup of host (BackupsDisable = 2),
 >   b) plain tar pc/<hostname> from old to new server,
 >   c) copy <hostname>.pl from old to new server,
 >   d) add <hostname> to hosts file of new server,
 > 3) shutdown old server when all hosts have been migrated.
 > 
 > It's basically the same process described in the wiki without:
 > 1) pre-copying the pool (it would take ages; one of the servers only has
 > a 100 Mbps internet connection)

I'm confused: copying the pool is too slow, yet you plan to "plain tar"
the pc directory, which may be tens to hundreds of times larger (due to
pool deduplication)?
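You can measure the gap on your own data before committing. GNU du counts
hard-linked data once by default, while du -l counts every name in full,
which is roughly what a plain tar stream would carry. A small self-contained
demo (temporary directory only; no BackupPC paths assumed):

```shell
# Demo: hard-linked data is stored once on disk, but tar writes each
# directory entry in full, so a "plain tar" of pc/ dwarfs the pool.
demo=$(mktemp -d)
head -c 100000 /dev/zero > "$demo/original"   # ~100 KB of data
ln "$demo/original" "$demo/second-name"       # second name, same inode
disk_kb=$(du -sk "$demo" | cut -f1)           # hard links counted once
tar_kb=$(du -skl "$demo" | cut -f1)           # -l: every name in full
echo "on disk: ${disk_kb} KB; a plain tar would carry: ${tar_kb} KB"
rm -rf "$demo"
```

On a real install, comparing du -sk against du -skl on the pc directory
gives the actual blow-up factor for your data.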

 > 2) using BackupPC_tarPCCopy (it does hard links against the pool, which
 > wouldn't exist)
 > 
 > My reasoning is that I no longer need the hard links or the pool,
 > thanks to ZFS deduplication.

That sounds right theoretically, though you would need to test it in
practice. 

 > 
 > What would happen if I did that?
 > Would BackupPC regenerate the pool over time (with new backups
 > coming in)?

A *new* pool would be generated based upon any new backups. Old files
that exist only in the pc directory would obviously not be added to
the pool. The backups copied over from the old pc directory would
remain *unlinked*.

You could run a script that crawls through your pc directory and
relinks the old backups into the new pool.
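For illustration only, here is a minimal sketch of that idea, pooling by
full md5sum. This is a simplification: BackupPC's real pool names files by
a partial MD5 of the uncompressed content, with collision chains, so a
production relinker (such as the BackupPC_fixLinks script that has
circulated on the wiki) must replicate that scheme. This demo just shows
the link-replace mechanics; the example paths are hypothetical.

```shell
# relink_tree POOL PCDIR: replace every file under PCDIR with a hard
# link to a content-addressed copy under POOL. The pooling key here is
# the full md5sum -- a simplification of BackupPC's partial-MD5 scheme.
relink_tree() {
    pool=$1; pcdir=$2
    mkdir -p "$pool"
    find "$pcdir" -type f | while read -r f; do
        h=$(md5sum "$f" | awk '{print $1}')
        p="$pool/$h"
        if [ -e "$p" ] && cmp -s "$p" "$f"; then
            ln -f "$p" "$f"   # duplicate: point this name at the pool copy
        else
            ln "$f" "$p"      # first sighting: add it to the pool
        fi
    done
}
# e.g. relink_tree /var/lib/backuppc/cpool /var/lib/backuppc/pc
```

Run it only against a copy first, and remember that on a compressed cpool
the candidate files must be compared in their stored (compressed) form.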

 > Would I still be able to restore files, browse backups, etc ?
Yes.
 > And finally, what would happen if I disabled compression on the new server?
 > I remember reading that it would only affect new backups, and that I would
 > still be able to access the old ones.
True.

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/