BackupPC-users

Re: [BackupPC-users] Newbie setup questions

From: hansbkk AT gmail DOT com
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Fri, 11 Mar 2011 09:35:49 +0700
On Fri, Mar 11, 2011 at 3:46 AM, Michael Conner <mdc1952 AT gmail DOT com> 
wrote:
> That is good to know. Actually things are a little better than I thought: the 
> spare machine is a Dell Dimension 2400 with a Pentium 4, max 2 GB memory. So I 
> guess I could slap a new, bigger drive into it and use it. My basic plan is to 
> get backups going to one machine and then dupe those to a NAS elsewhere in 
> the building. While we have a small staff, our building is 62,000 sq ft with 
> three floors, so I can get them physically separated even if not really off 
> site. For the web server, we have a two-drive RAID setup with two spare 
> drive bays. Besides backing up with BPC, I would also dupe the drive on a 
> schedule and take it off site.


To expand on Jeffrey's comment below - the idea of "duping" your
backups is fraught with issues once the BPC pool filesystem grows past
a certain size.

To handle the creation of a redundant backup, I would advise one of
the following:

A - Periodically use BPC itself to run a full backup set to a
different target filesystem. This is the simplest approach and quite
likely the fastest; it only becomes a problem if you have a limited
time window, in which case LVM snapshotting can help, as Jeffrey
mentioned.

B - Use a block-level cloning process (dd or one of its derivatives,
or a Ghost-like COTS program if that's more comfortable for you) to
copy the partition to a removable drive. Some people use temporary
RAID1 mirrors for this, but I don't recommend it.

C - Use BackupPC_tarPCCopy, a script included with BPC that is
designed for exactly this job.
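As a rough sketch, options B and C might look like the commands below.
All device names, volume names and paths are hypothetical - adjust
them to your layout - and the pool must not be in active use while you
copy it:

```shell
# Option B: block-level clone of the pool partition to a removable
# drive (hypothetical devices; stop BackupPC first so the copy is
# consistent, and double-check of= before running dd!)
dd if=/dev/vg_bpc/backuppc of=/dev/sdb1 bs=4M conv=noerror status=progress

# Option C: copy the pool itself with plain rsync (no hardlink
# bookkeeping needed inside cpool/), then let BackupPC_tarPCCopy emit
# a tar stream that recreates the pc/ trees' hardlinks on the target
cd /new/backuppc                      # hypothetical new pool root
rsync -a /var/lib/backuppc/cpool/ cpool/
mkdir -p pc && cd pc
/usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc | tar xvPf -
```

The point of the tar stream in option C is that tar records the link
structure once, instead of making the copier discover it by comparing
inode numbers across millions of files.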

Where you run into problems is trying to copy the hardlinked BPC
filesystem over at the **file level** - even rsync (with -H) will
choke when it has to keep track of millions and millions of hardlinks
to the same inodes.
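You can see the bookkeeping problem in miniature with nothing but a
temp directory (throwaway files, not a real pool):

```shell
# Every pooled file in BPC may have hundreds or thousands of directory
# entries pointing at one inode; a file-level copier has to remember
# every inode it has seen to reproduce that structure on the target.
set -e
demo=$(mktemp -d)
echo "pooled file contents" > "$demo/pool_file"
# simulate pc/ trees linking back into the pool
for i in $(seq 1 100); do
    ln "$demo/pool_file" "$demo/link_$i"
done
stat -c '%h' "$demo/pool_file"           # link count: prints 101
find "$demo" -type f -links +1 | wc -l   # entries sharing inodes: 101
rm -rf "$demo"
```

Now multiply those 101 entries by a few million pooled files and you
have the table rsync is trying to hold in memory.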

BTW, even if you don't do snapshots, you should use LVM from the
beginning as the basis for your new BPC target filesystem - it gives
you the flexibility to grow the volume later and avoids having to do
any of the above any more than necessary.
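For example, carving the BPC target out of LVM on day one might look
like this (volume group name, devices and sizes are all hypothetical);
growing it later is then a two-command job, and snapshots come for
free:

```shell
# initial setup: a dedicated logical volume for the BPC pool
pvcreate /dev/sdb1
vgcreate vg_bpc /dev/sdb1
lvcreate -L 500G -n backuppc vg_bpc
mkfs.ext4 /dev/vg_bpc/backuppc

# later, after adding another disk to the VG, grow it in place
vgextend vg_bpc /dev/sdc1
lvextend -L +500G /dev/vg_bpc/backuppc
resize2fs /dev/vg_bpc/backuppc    # ext4 can grow while mounted

# and a consistent point-in-time snapshot for block-level copying
lvcreate -s -L 20G -n bpc_snap vg_bpc/backuppc
```

The snapshot only needs enough space (-L 20G here, a guess) to absorb
writes to the origin volume while the copy runs; remove it with
lvremove when you're done.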

Hope this helps...

On Fri, Mar 11, 2011 at 5:04 AM, Jeffrey J. Kosowsky
<backuppc AT kosowsky DOT org> wrote:
> Keep in mind the point that Les made regarding backing up BackupPC
> archives. Due to the hard link structure, the fastest way to back up
> any reasonably large backup is at the partition level. This also makes
> it hard to enlarge your archive space should you outgrow your
> disk. One good solution is to use lvm since you can
> enlarge/expand/move partitions across multiple disks. You can also use
> lvm to create partition snapshots that can then be replicated as backups.

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/