Subject: Re: [BackupPC-users] Using rsync for blockdevice-level synchronisation of BackupPC pools
From: Les Mikesell <lesmikesell AT gmail DOT com>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Wed, 02 Sep 2009 13:08:27 -0500
Pieter Wuille wrote:
>> The one thing that would bother me about this approach is that you would 
>> have a fairly long window of time while the remote filesystem chunks are 
>> being updated.  While rsync normally creates a copy of an individual 
>> file and does not delete the original until the copy is complete, a 
>> mis-matched set of filesystem chunks would likely not be usable.  Since 
>> disasters always happen at the worst possible time, I'd want to be sure 
>> you could recover from losing the primary filesystem (site?) in the 
>> middle of a remote copy.  This might be done by keeping a 2nd copy of 
>> the files at the remote location, keeping them on an LVM with a snapshot 
>> taken before each update, or perhaps catting them together onto a 
>> removable device for fast access after the chunks update.
> 
> You're very right, and I thought about it too. Instead of using a RAID1 on
> the offsite backup, there are two separate backups on the offsite machine,
> and synchronisation switches between them. This also enables the use of
> rsync's --inplace option.

That should be safe enough, but doesn't that mean you xfer each set of 
changes twice since the alternate would be older?
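
For concreteness, the alternation could be driven by something like 
this (the paths, the "offsite" hostname, and the state file are all 
invented for the example; just a sketch, not your actual script):

  #!/bin/sh
  # Alternate the rsync target between two offsite copies, so that a
  # complete (if one cycle older) copy always survives a crash or a
  # lost link in mid-transfer.
  STATE=/var/lib/pool-sync/last-target    # which copy was written last
  last=`cat "$STATE" 2>/dev/null`
  if [ "$last" = "a" ]; then target=b; else target=a; fi
  rsync -a --inplace /pool-chunks/ offsite:/backups/chunks-$target/ \
      && echo "$target" > "$STATE"

The state file is only updated on a successful rsync exit, so a failed 
run just retries the same copy next time.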

> Keeping an LVM snapshot is a possibility, but it becomes somewhat complex to
> manage: you get a snapshot of a volume containing a filesystem whose files
> correspond to parts of a snapshot of a volume containing an (encrypted)
> filesystem containing a directory that corresponds to a pool of backups...

The snapshot would just be the same files you had before the last xfer 
started.  But you'd still need space to hold the large file changes.
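
If you did go the snapshot route, the mechanics on the offsite box are 
at least short (volume group and names are invented here, and -L has 
to be sized to absorb every block --inplace rewrites in one run):

  # Run on the offsite machine before each transfer starts.
  lvremove -f /dev/vg0/chunks-snap 2>/dev/null  # drop last cycle's snapshot
  lvcreate -s -L 20G -n chunks-snap /dev/vg0/chunks
  # ...now let the primary rsync --inplace into the filesystem mounted
  # from /dev/vg0/chunks; chunks-snap preserves the pre-transfer blocks.

If the snapshot runs out of space mid-transfer it gets invalidated, 
which is exactly the large-file-change problem above.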

> Catting the part files together to a device after transmission isn't a
> complete solution: what if the machine crashes during the catting...?

A machine crash would have to destroy the filesystem containing the 
chunks to be a real problem, and I wouldn't expect both your primary 
server and the server holding the file chunks to die at the same time. 
Losing the chunks would just mean you'd have to xfer the whole mess 
again.  Perhaps you could alternate the catting to 2 different devices 
so you'd always have one ready to whisk off to the restore location.
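
Something along these lines would do that alternation (device names 
and the chunk naming scheme are made up for the example):

  #!/bin/sh
  # After a verified transfer, reassemble the chunk files onto whichever
  # of two removable devices was NOT written last, so one consistent
  # image is always ready to carry to the restore site.  The glob must
  # sort the chunk files back into their original order.
  STATE=/var/lib/pool-sync/last-device
  last=`cat "$STATE" 2>/dev/null`
  if [ "$last" = "sdc" ]; then dev=sdd; else dev=sdc; fi
  cat /backups/chunks-current/chunk.* > /dev/$dev \
      && echo "$dev" > "$STATE"

A crash during the cat then only costs you the device being written; 
the other one still holds the previous complete image.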

-- 
   Les Mikesell
    lesmikesell AT gmail DOT com



