BackupPC-users

Re: [BackupPC-users] BackupPC Pool synchronization?

From: Lars Tobias Skjong-Børsting <lists AT snota DOT no>
To: backuppc-users AT lists.sourceforge DOT net
Date: Fri, 01 Mar 2013 10:17:56 +0100
Hi,

On 3/1/13 12:34 AM, Les Mikesell wrote:
> On Thu, Feb 28, 2013 at 3:10 PM, Mark Campbell <mcampbell AT emediatrade DOT 
> com> wrote:
>
>> So I'm trying to get a BackupPC pool synced on a daily basis from a 1TB MD
>> RAID1 array to an external Fireproof drive (with plans to also sync to a
>> remote server at our collo).
> 
> I'm not sure anyone has come up with a really good way to do this.
> One approach is to use a 3-member raid1 where you periodically remove
> a drive and resync a new one.   If you have reasonable remote
> bandwidth and enough of a backup window, it is much easier to just run
> another instance of backuppc hitting the same targets independently.

I have come up with what is, IMHO, a good way to do this using ZFS (ZFS on Linux).

Description:
* uses 3 disks.
* at all times, one mirror disk is kept in a fire safe.
* periodically, the disk in the safe is swapped with a mirror disk in the server.

1. create a zpool with three mirrored members.
2. create a filesystem on it and mount it at /var/lib/backuppc.
3. do some backups.
4. detach one disk and put it in the safe.
5. do more backups.
6. detach a second disk and physically swap it with the disk in the safe.
7. attach and online the disk from the safe.
8. watch it sync up.
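The steps above can be sketched roughly as follows. Device names (/dev/sdb etc.) and the pool/filesystem names are illustrative, not from the original mail; note also that for the resync to copy only changed blocks, the "detach" in the steps should be `zpool offline` (a detached disk loses its pool membership and would need a full resilver, whereas an offlined disk gets a delta resilver when brought back online):

```shell
# 1-2. create a three-way mirrored pool and a filesystem for the BackupPC pool
zpool create tank mirror /dev/sdb /dev/sdc /dev/sdd
zfs create -o mountpoint=/var/lib/backuppc tank/backuppc

# 4. take one member offline and move it to the fire safe
zpool offline tank /dev/sdd

# 6. later: take a second disk offline and swap it with the one in the safe
zpool offline tank /dev/sdc

# 7-8. bring the returned disk online; ZFS resilvers only the changed blocks
zpool online tank /dev/sdd
zpool status tank   # watch the resilver progress
```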

I am currently using 2TB disks with a swap period of one month. Because ZFS tracks which blocks have changed, the resync copies only the blocks modified in the last month, not the whole disk. For example, with 10GB changed it syncs in less than 25 minutes (approx. 7 MB/s). That is far faster than anything I got with mdraid, which resyncs every block.

ZFS also brings checksumming and error correction of both file content and file metadata. BackupPC additionally supports error correction through par2, which gives an extra layer of data protection.

Backing up large numbers of files can take a very long time because of hard disk seeking. This can be alleviated by using an SSD cache drive for ZFS. Adding read (L2ARC) and write (ZIL) caching on a small SSD (30 GB) cut incremental backup times in half for some shares.
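As a sketch of the caching setup, assuming the SSD has been split into two partitions (names are illustrative):

```shell
# dedicated log device (ZIL/SLOG): accelerates synchronous writes
zpool add tank log /dev/sde1

# L2ARC cache device: accelerates reads of hot data, which helps the
# metadata-heavy traversal of a BackupPC pool
zpool add tank cache /dev/sde2
```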

As for remote sync, you can use "zfs send" on the backup server and "zfs receive" on the offsite server. This sends only the differences since the last sync (like rsync), and will probably be significantly faster than rsync, which in addition has to resolve all the hardlinks.
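A minimal sketch of the send/receive cycle; the snapshot names, the remote host "offsite", and the remote pool name are assumptions for illustration:

```shell
# initial full replication to the offsite server
zfs snapshot tank/backuppc@2013-03-01
zfs send tank/backuppc@2013-03-01 | ssh offsite zfs receive -F remotepool/backuppc

# subsequent syncs: send only the blocks changed between the two snapshots
zfs snapshot tank/backuppc@2013-04-01
zfs send -i @2013-03-01 tank/backuppc@2013-04-01 | ssh offsite zfs receive remotepool/backuppc
```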

-- 
Best regards,
Lars Tobias

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/