Subject: Re: [BackupPC-users] Craig has posted design details of 4.x to the developers list
From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Sat, 05 Mar 2011 00:27:13 +1100

On 04/03/11 23:32, Pedro M. S. Oliveira wrote:
> I'm not sure about doing the 16TB (performance, backup duration), so I'm
> thinking of some kind of block device backup.
> Idea:
> 1 - Create lvm snapshot of the block device
> 2 - Back up the lvm snapshot (I could use dd, but then it would be a full
> backup every time), ideally something like rsync where only the changed
> blocks of the block device are transferred.
> 
> Benefits:
> 1 - Performance, although the gains only show once the disk is about 70%
> full, or 45%-50% full for small files.
> 2 - Restore the backup directly into a volume.
> 3 - Possibility of mounting it on a loop device.
> 
> Cons:
> The first backup would take ages, and the initial FS should have zeros in
> its free space (so the initial backup can use compression efficiently).
> This approach is only possible on unix/linux filesystems.
> The LVM methods for creating snapshots aren't standard, and partitioning /
> volume creation need to be addressed and thought through before deployment
> (is this a con?)
> 
> The recovery method should be able to restore the block device (in this
> case an LVM volume).
> I can see lots of difficulties with this approach, but the benefits could
> be great too.
> What do you all think about this?
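
On the point about zeroing free space so the first image backup compresses
well: the usual trick is just to fill the filesystem with a file of zeros
and delete it before taking that first backup. A rough sketch (the mount
point is only an example):

  # Fill free space with zeros; dd will stop with "No space left on
  # device", which is expected here, then remove the file.
  dd if=/dev/zero of=/mnt/guest/zero.fill bs=1M
  rm -f /mnt/guest/zero.fill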

Been there, done that, and it works well already (sort of)...

I have an MS Windows VM which runs under Linux, and its 'hard drive' is
in effect a single file. I used to:
1) use LVM to take a snapshot
2) copy the raw snapshot image to another location on the same disk
3) delete the snapshot
4) split the copy of the image into individual 20M chunks (split)
5) use backuppc/rsync to back up the chunks
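
Roughly, those steps look something like the following (the volume group,
LV and path names are just examples, not my real config):

  # 1) snapshot the VM's logical volume
  lvcreate --snapshot --size 5G --name winvm-snap /dev/vg0/winvm
  # 2) copy the raw snapshot image to another location on the same disk
  dd if=/dev/vg0/winvm-snap of=/var/tmp/winvm.img bs=1M
  # 3) delete the snapshot
  lvremove -f /dev/vg0/winvm-snap
  # 4) split the copy into 20M chunks (-a 3 allows enough suffixes for a
  #    large image)
  mkdir -p /var/tmp/winvm-chunks
  split -a 3 -b 20M /var/tmp/winvm.img /var/tmp/winvm-chunks/winvm.img.
  # 5) point backuppc/rsync at /var/tmp/winvm-chunks/ as usual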

The problem with this is the time for the backup to complete (about 5
hours for steps 1 - 3, and another 1 hour for step 4).

Recently, I have skipped steps 1 and 3 and just shut down the machine
before taking the copy; this now finishes in about 30 minutes. In any
case, backuppc handles this quite well.

The reasons for splitting the image are:
1) backuppc can take better advantage of pooling, since most chunks have
not changed between backups (one big file means the entire file changes
every backup).
2) backuppc doesn't seem to handle really large files with changes in
them well (i.e., performance-wise it slows things down a lot).
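
Since the chunks have fixed-length suffixes, the restore Pedro asks about
is just a concatenation in glob order, either back onto the volume or into
a file to loop-mount. Something like (names are examples again):

  cat /restore/winvm-chunks/winvm.img.* > /dev/vg0/winvm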

Hope this helps...

PS: I also use the same idea for certain niche applications that produce
a very large 'backup' file: I split it into chunks before letting
backuppc back it up. I also make sure the files are decompressed first,
letting rsync/ssh/backuppc handle the compression at the transport and
file system levels.
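
For those application dumps the pre-processing is the same idea, e.g.
(assuming a gzipped dump; the file names are made up):

  gunzip nightly-dump.sql.gz
  split -a 3 -b 20M nightly-dump.sql nightly-dump.sql.part-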

Regards,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au
