BackupPC-users

Subject: Re: [BackupPC-users] gluster
From: dan <dandenson AT gmail DOT com>
To: "Les Mikesell" <les AT futuresource DOT com>
Date: Tue, 22 Jul 2008 15:14:01 -0600
The thing is that it isn't possible to have a perfectly synced remote copy of a filesystem at every moment unless the two are tied together in something like RAID1 with all writes synchronous, which would be murder on performance.  Otherwise there will always be some delay in the remote sync.  At least with a clustered filesystem the remote write is queued immediately, so the maximum lag between the local and remote copies is dictated by the bandwidth between them rather than by a fixed interval and a cron script.
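For example, a replicated GlusterFS volume queues every write to both bricks as it happens.  A rough sketch, assuming the volume-management CLI from later GlusterFS releases (this thread predates it) and made-up hostnames and paths:

  # Peer the two servers, then create a 2-way replicated volume;
  # every write is queued to both bricks as it happens.
  gluster peer probe site-b
  gluster volume create backuppc replica 2 \
      site-a:/bricks/backuppc site-b:/bricks/backuppc
  gluster volume start backuppc

  # Mount it where BackupPC keeps its pool:
  mount -t glusterfs site-a:/backuppc /var/lib/backuppc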

Alternatively, you could build a RAID1 over a remote iSCSI target and rely on the device mapper (or md) to handle the syncing at the block level.  You could also pipe the iSCSI traffic through a compressed ssh tunnel between sites, but you should probably put that tunnel under xinetd so it is started each time it is needed and restarted when it drops.
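A rough sketch of that idea with open-iscsi and mdadm (the host, IQN, and device names are hypothetical):

  # Compressed ssh tunnel carrying the iSCSI traffic (port 3260) to the
  # remote site; under xinetd you'd wrap this so it gets restarted.
  ssh -f -N -C -L 3260:127.0.0.1:3260 user@remote-site

  # Log in to the remote target through the tunnel.
  iscsiadm -m discovery -t sendtargets -p 127.0.0.1
  iscsiadm -m node -T iqn.2008-07.example:backuppc -p 127.0.0.1 --login

  # Mirror the local disk with the remote iSCSI disk; mark the remote
  # half write-mostly so reads stay on the local side.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sdb1 --write-mostly /dev/sdc1    # /dev/sdc1 = iSCSI device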

The issue with that is that a de-sync of the mirror would trigger a rebuild of the mirror set, which would probably resync the entire drive, and that would take ages!
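One mitigation (not raised in the thread): md's write-intent bitmap records which regions changed while a mirror half was missing, so the resync only copies those regions instead of the whole drive.  A sketch against the hypothetical /dev/md0 above:

  # Add an internal write-intent bitmap to the running array.
  mdadm --grow --bitmap=internal /dev/md0

  # When the link comes back, re-add the remote half; only the regions
  # marked dirty in the bitmap are resynced.
  mdadm /dev/md0 --re-add /dev/sdc1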

A real cluster filesystem would avoid this, but I/O performance on cluster filesystems is likely to exclude them from the BackupPC-compatible list.


On Fri, Jul 11, 2008 at 9:29 AM, Les Mikesell <les AT futuresource DOT com> wrote:
dan wrote:
If it's the server that the write was going to, it's lost.  If it is the remote server, then it will resync when it is back online.

Being able to recover from a complete loss of the local server at any point in time (building disaster, software/operator error, etc.) is pretty much the point of having the offsite copy.  If a crash mid-sync leaves it unusable then you'd probably want a way to cycle between two remote copies.
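One way to get that cycling (a sketch, not something from the thread; the host and paths are hypothetical): alternate rsync destinations by day, so a crash mid-sync can only affect one of the two remote copies:

  #!/bin/bash
  # Alternate between two remote copies so a crash mid-sync leaves
  # the other copy intact.
  day=$(( $(date +%s) / 86400 ))        # days since the epoch
  if [ $(( day % 2 )) -eq 0 ]; then
      dest=offsite:/backups/copy-a
  else
      dest=offsite:/backups/copy-b
  fi
  rsync -aH --delete /var/lib/backuppc/ "$dest/"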

Maybe we are making this too complicated.  Has anyone tried unmounting the archive partition, using something like partimage http://www.partimage.org/Main_Page to copy it to a local file, and then rsync'ing that copy offsite?  Partimage knows enough about filesystems to copy only the used portions and is relatively fast.  If you absolutely had to stay online during this operation you might be able to use an LVM snapshot, or temporarily break a raid1 set for the time the copy takes.

It would take a substantial amount of spare disk space to make this work, though, and I don't know how well rsync would handle finding the unmodified portions in the copy made by partimage (you obviously wouldn't want to compress it).  Rsync normally builds a duplicate file during its run and won't replace the original until it is finished.  That means you'd need space for two copies at the remote site, but you avoid the window where your only good copy can be mangled.
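A sketch of that workflow with made-up volume names, uncompressed per the note above so rsync has a chance of matching unchanged regions:

  # Snapshot the archive volume so the image is consistent while
  # BackupPC stays online (volume group and names are hypothetical).
  lvcreate --snapshot --size 10G --name pc-snap /dev/vg0/backuppc

  # Image only the used blocks: -z0 = no compression, -b = batch mode,
  # -d = skip the description prompt.
  partimage -z0 -b -d save /dev/vg0/pc-snap /spare/backuppc.img
  lvremove -f /dev/vg0/pc-snap

  # Ship it offsite.  By default rsync builds the new remote copy
  # alongside the old one, so the remote end needs room for two images
  # but never has a half-written "only" copy.
  rsync -av --partial /spare/backuppc.img offsite:/backups/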


--
 Les Mikesell
  lesmikesell AT gmail DOT com
