Subject: Re: [BackupPC-users] Backup of VM images
From: "Jeffrey J. Kosowsky" <backuppc AT kosowsky DOT org>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Thu, 09 Jun 2011 23:49:55 -0400
Jim Wilcoxson wrote at about 14:10:31 +0000 on Thursday, June 9, 2011:
 > Boniforti Flavio <flavio <at> piramide.ch> writes:
 > 
 > > 
 > > Hello to both of you, Adam and Andrew.
 > > 
 > > Great suggestion, backing up the VMs as if they were normal
 > > clients...
 > > 
 > > That's an option I can't afford to implement. I've been asked
 > > *explicitly* to back up the images themselves!
 > > 
 > ...
 > > 
 > > Indeed, the splitting would be OK, but still: I need to back up a
 > > *big file* in which only a few bytes may change...
 > 
 > Hi Flavio - I'm developing a "push" backup program, HashBackup, that will
 > back up a VM image at around 20MB/sec for changed data and 40-50MB/sec for
 > unchanged data using dedup.  This is on a MacBook with a 60MB/sec hard
 > drive.  You could attach a USB disk, back up to that, and also send the
 > incrementals offsite.  Incrementals will be minimized to the actual disk
 > blocks that changed.
 > 
 > Rsync usually doesn't work that well with VM images because by default it
 > uses a block size of sqrt(filesize).  For large VM images, the block size
 > becomes very large.  VM images often have lots of scattered small changes,
 > defeating rsync's delta algorithm.

Just as an FYI, BackupPC uses a more limited block-size range that does
not get that huge. In fact, the block size ranges from 2048 to 16384
bytes, with the value within that range set to int(file_size/10000).
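
To put rough numbers on the difference, here is a minimal sketch (my own
illustration, not code from rsync or BackupPC) comparing rsync's default
block-size heuristic with the clamped int(file_size/10000) rule above:

import math

def rsync_default_block_size(file_size):
    # rsync's default is roughly sqrt(file_size), with a 700-byte floor
    # (upper caps in newer rsync versions are ignored in this sketch).
    return max(700, int(math.sqrt(file_size)))

def backuppc_block_size(file_size):
    # Clamp int(file_size / 10000) to the 2048..16384 range described above.
    return min(16384, max(2048, file_size // 10000))

for gib in (1, 10, 100):
    size = gib * 1024**3
    print(gib, "GiB image:",
          "rsync ~", rsync_default_block_size(size), "bytes,",
          "BackupPC", backuppc_block_size(size), "bytes")

For a 100GB image the sqrt heuristic lands around 300KB blocks, while the
clamped rule stays at 16384 bytes.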


 > In contrast, HashBackup uses 4K blocks for VM images, which minimizes the
 > size of the incremental.  You could also do this with rsync by specifying
 > the block size, and I think BackupPC may force a block size of 2K
 > (sometimes?), but rsync doesn't have efficient data structures for handling
 > this with huge VM images and just goes CPU bound.
 > 
 > If you want to try it, the beta site is http://www.hashbackup.com
 > 
 > Basically, you would do:
 > 
 > $ hb init -c /mnt/usbdrive/vm1
 > $ hb backup -c /mnt/usbdrive/vm1 -D1g ~/Documents/VMImages/vm1
 > 
 > Using 1GB of RAM, HashBackup can dedup a 128GB VM image.
 > 
 > Jim
 > 
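
For what it's worth, that 1GB-of-RAM figure looks plausible as a
back-of-the-envelope check, assuming the 4K blocks mentioned above (my
own arithmetic, not HashBackup's documented internals):

image_size = 128 * 1024**3          # 128 GiB VM image
block_size = 4 * 1024               # 4 KiB dedup blocks, per the message above
blocks = image_size // block_size   # about 33.5 million blocks
ram_budget = 1 * 1024**3            # 1 GiB of RAM
print(blocks, "blocks,", ram_budget // blocks, "bytes of RAM per block entry")
# Roughly 32 bytes per block -- enough for a truncated hash plus bookkeeping.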

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
