BackupPC-users

Re: [BackupPC-users] Q: Occasional 'backup' of backupPC to offsite ..

2009-02-18 22:09:38
From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Thu, 19 Feb 2009 14:07:17 +1100

Tim Chipman wrote:
> Hi,
> 
> I was reviewing the recent thread on the topic of "how to make a
> backup copy of my backupPC server" - i.e., for offsite backup
> redundancy - and I wanted to see if this is done / doable, given
> certain circumstances:
> 
> - second site is a 100Mb connection (can get ~10-12 MB per second
> through the network)
> - replicate copy pushed not too frequently (maybe weekly or monthly)
> - storage pool size likely to start at ~ hundred gigs and creep up
> towards a few Tb over time
> - target storage is accessible via rsync, SSH, or iSCSI even.
> 
> I wonder, for example, about simply creating a cron job that
> 
> - mounts the remote iSCSI volume
> - stops BackupPC from running
> - uses either cp with the -a flag; rsync with the -H flag (which may
> have memory issues when a large data set is moved?); or, possibly
> better, the script apparently bundled with BackupPC 3.0 and later,
> "BackupPC_tarPCCopy" (?), to bring a copy of data from the source
> filesystem to the destination .. ?
> - once completed, unmounts the remote iSCSI filesystem and brings
> BackupPC back online.
> 
> Clearly, this has downsides, such as copying everything verbatim
> rather than doing incremental / differential transfers if using the
> tar-based script (?), and having to delete the 'offsite copy' before
> putting a new one there, assuming the remote disk isn't big enough to
> hold both the old and the new offsite copies at the same time...
> 
> Any thoughts or comments are certainly appreciated, and I'm sorry if
> I'm revisiting an old, tired, familiar query here - I had trouble
> getting any hits past the most recent ones on this topic in the forum.

I would suggest using rsync, and ensure you have the latest rsync v3 on
both sides. That should solve the memory problems (v3 builds its file
list incrementally rather than holding it all in memory) and give you
incremental transfers. The only problem I can foresee is that the run
time will grow as your number of backups grows, and may eventually
become too long...
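As a minimal sketch of how such a scheduled job might look (the pool path, remote host, and init script are placeholders, and the commands are echoed rather than executed so the sketch is safe to inspect before adapting it):

```shell
#!/bin/sh
# Hypothetical weekly offsite sync of the BackupPC pool via rsync v3.
# TOPDIR and REMOTE are placeholders -- adjust for your installation.
TOPDIR=/var/lib/backuppc
REMOTE=offsite.example.com:/mnt/backuppc-copy

# -a preserves ownership/permissions/times; -H preserves hardlinks,
# which the BackupPC pool depends on; --delete prunes files removed
# since the previous run.
RSYNC_CMD="rsync -aH --delete $TOPDIR/ $REMOTE/"

# Echo the steps instead of running them (dry-run sketch):
echo "/etc/init.d/backuppc stop"
echo "$RSYNC_CMD"
echo "/etc/init.d/backuppc start"
```

BackupPC should be stopped for the duration of the transfer so the pool is not modified mid-copy; remember that both ends need rsync v3 for the incremental file-list behaviour.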

The other option I would try is an enbd block device (the remote end)
as one member of a RAID1 with the local disk array. So you have md0,
made up of your 5 local HDDs, and then md1 as a RAID1 of md0 and the
enbd device. Then use mdadm to mark the enbd member as write-mostly,
so reads are served from the local array.

Please post back to advise how it works out...

Regards,
Adam


