On Sun, May 31, 2009 at 11:22:13AM -0400, Stephane Rouleau wrote:
> Pieter Wuille wrote:
> >
> > This is how we handle backups of the BackupPC pool:
> > * The pool itself is on a LUKS-encrypted XFS filesystem, on an LVM volume,
> >   on a software RAID1 of two 1TB disks.
> > * Twice a week, the following procedure is run:
> >   * Freeze the XFS filesystem, sync, and take an LVM snapshot of the
> >     encrypted volume.
> >   * Unfreeze.
> >   * Send the snapshot over ssh to an offsite server (which thus only ever
> >     sees the encrypted data).
> >   * Remove the snapshot.
> > * The offsite server has two smaller disks (not in RAID), and snapshots are
> >   sent alternately to one and the other. This means we still have a complete
> >   pool if something goes wrong during the transfer (which takes +- a day).
> > * The consistency of the offsite backups can be verified by exporting them
> >   over NBD (network block device) and mounting them on the normal backup
> >   server (which has the encryption keys).
> >
> > We use a blockdevice-based solution instead of a filesystem-based one
> > because the many small files (16 million inodes and growing) make
> > filesystem-based approaches very disk- and CPU-intensive (simply running
> > "find | wc -l" in the root takes hours). It also makes encryption easier.
> > We are also working on an rsync-like system for block devices (though that
> > may still take some time...), which would bring the time for synchronising
> > the backup server with the offsite one down to 1-2 hours.
> >
> > Greetz,
> >
>
> Pieter,
>
> This sounds rather close to what I'd like to have over the coming months. I
> just recently reset our backup pool, and rather stupidly did not select an
> encrypted filesystem (otherwise we're on XFS, LVM, RAID1, 2x1.5TB). I figured
> I'd encrypt only the offsite copy, but I see now that it'd be much better to
> send the data at the block level.
>
> You mention the capacity of your pool file system, but how much space is
> typically used on it? Curious also what kind of connection speed you have
> with your offsite backup solution.
Some numbers:
* backup server has 1TB of RAID1 storage
* contains, amongst others, a 400GiB XFS volume for backuppc
* daily/weekly backups of +- 195GiB of data
* contains 256GiB of backups (expected to increase significantly still)
* contains 16.8 million inodes
* according to LVM snapshot usage, avg. 1.5 GiB of data blocks change on
this volume daily
* offsite backup server has 2x 500GB of non-RAID storage
* twice a week, the whole 400GiB volume is sent over a 100Mbps connection (at
+- 8.1MiB/s)
* that's a huge waste for maybe 5GiB of changed data, but the bandwidth is
generously provided by the university
* we hope to have a more efficient blockdevice-level synchronisation system
in a few months
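
For the curious, the twice-weekly procedure quoted at the top boils down to
roughly the script below. The volume group, LV, mount point, host name, and
snapshot size are placeholders I've made up for illustration, not our exact
configuration; it prints the commands by default (DRY_RUN=1) so you can review
them before setting DRY_RUN=0:

```shell
#!/bin/sh
# Sketch of the freeze/snapshot/send/remove cycle described above.
# All device, volume, and host names are examples, not our real setup.
set -e

DRY_RUN=${DRY_RUN:-1}       # 1 = only print commands; 0 = actually run them
VG=vg0                      # volume group holding the pool (example)
LV=pool_crypt               # the LUKS-encrypted LV; XFS lives inside it
MNT=/var/lib/backuppc       # mount point of the decrypted filesystem
REMOTE=offsite.example.org  # offsite server (example host)
DISK=/dev/sdb               # alternates between the two offsite disks

run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1. Freeze the filesystem so the snapshot below is consistent.
run xfs_freeze -f "$MNT"
run sync

# 2. Snapshot the *encrypted* volume; the offsite end never sees plaintext.
run lvcreate -s -L 20G -n offsite_snap "/dev/$VG/$LV"

# 3. Unfreeze immediately; backups can continue while the transfer runs.
run xfs_freeze -u "$MNT"

# 4. Stream the snapshot to the offsite disk over ssh.
#    At +- 8.1MiB/s, 400GiB takes about 400*1024/8.1 s, i.e. roughly 14 hours,
#    which matches the "+- a day" figure above.
run sh -c "dd if=/dev/$VG/offsite_snap bs=1M | ssh $REMOTE dd of=$DISK bs=1M"

# 5. Drop the snapshot once the transfer is done.
run lvremove -f "/dev/$VG/offsite_snap"

# Verification (occasional): on the offsite server, export the disk over NBD;
# on the backup server, attach it with nbd-client, luksOpen it with the local
# keys, and mount the filesystem read-only.
```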
PS: sorry for the strange subject earlier - I used the wrong 'from' address
first and forwarded it.
--
Pieter
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/