Re: [BackupPC-users] Yet another offsite backup question. how to do the restore?

From: Adam Goryachev <mailinglists AT websitemanagers.com DOT au>
To: backuppc-users AT lists.sourceforge DOT net
Date: Thu, 07 Jan 2010 10:36:03 +1100

dstahl wrote:
> My apologies if this has been answered before. I tried searching
> around the forums and could not find a straightforward answer to this
> question. Our set up is one backuppc server on the lan backing up
> multiple servers, we are then off-siting all the data to another
> server via rsync. This is working fine. However when I got to the
> point of testing our restores from the off-site location, that is
> where things get a bit confusing. 

Only because you haven't identified your requirements precisely
enough... perhaps? Let's see...

> First I tried installing backuppc
> on the offsite server and doing a restore from the web-gui. big
> mistake. not only did it erase some of the full and incremental
> backups because I forgot to set the retention periods as the same on
> both servers,

You mean your backup server wasn't backing up the config from the live
server? This is something to learn from, i.e., this is WHY we TEST
things: so we can find out we forgot to sync the config before we really
need to do a remote restore...
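As a concrete sketch (hostnames and paths are my examples, assuming a
Debian-style layout with the config in /etc/backuppc and the pool in
/var/lib/backuppc), run from the offsite server:

```shell
# Pull the BackupPC config over along with the pool, so the offsite
# server uses the SAME retention settings as the primary.
# Hostnames and paths are examples only - adjust for your install.
rsync -av --delete root@primary:/etc/backuppc/ /etc/backuppc/
rsync -aH --delete root@primary:/var/lib/backuppc/ /var/lib/backuppc/
```

Note the -H on the pool copy: without it rsync breaks BackupPC's
hardlinks and the copied pool balloons to many times its real size.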

> but also entirely screws up rsync script, since the
> offsite server's backuppc install modified files in the data
> directories,

I would think the *install* of backuppc doesn't modify any content of
the data directories. However, running the newly installed backuppc
would eventually run the nightly script, which would expire (delete) a
heap of old backups (because your config was not what you wanted). Once
those backups are expired, the nightly script would then prune the
pool/cpool and remove all the files that are no longer needed by your
backups, in the process renaming some pool files where files with
collisions have been removed etc...

> therefore causing almost a full rsync each time instead
> of just new and modified rsyncs. 

So, if you had properly configured your offsite backuppc, then this
issue shouldn't occur.

The second problem with what you are testing is that you haven't
explained under what scenario you would need to restore files from your
remote backuppc server while still having all your data on your local
backuppc server to rsync nightly. i.e., if your local backuppc server is
OK, why would you need/want to restore from the remote one?

> Next I tried the command-line
> restore. putting this all in a tar seems crazy since it is almost 3TB
> of data. That would mean I would need 6TB of space, which I don't
> have.

Why do you need 6TB of space? Do you mean you need 3TB to store your
backups, and another 3TB for a single copy of all your data (the restore
tar)? Or is there a temp file created somewhere that needs 3TB?
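If the extra 3TB is just for the restore tar, note that the tar never
needs to hit disk at all: BackupPC_tarCreate can be streamed straight
into tar -x, locally or over ssh. A sketch (hostnames, share name and
the bin path are examples - the install location varies by distro):

```shell
# Stream the restore: the tar only ever exists inside the pipe, so no
# extra 3TB of temp space is needed. Run as the backuppc user.
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
    -h myserver -n -1 -s '/server/shares' / \
  | ssh root@recoveryhost 'tar -xpf - -C /restore/target'
```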

> I tried piping the command like so: BackupPC_tarCreate -h
> myserver -n -1 -s '/server/shares' / > pm.tar However it gives me out
> of memory errors. (this was done on a lab server. not sure if real
> server with more memory will give same error)
> 
> So here is my condensed question. Given my scenario, Backuppc to
> offsite server via rsync. What is the best way to do a full (and
> partial) restore from the offsite server?

Better to define your requirements first...
I would suggest a number of options:
1) Attach an external (esata) drive to your remote backuppc server, do a
restore (from the gui, or cmdline) with the destination set to the
external drive, then take the drive to your LAN (or recovery location)
and use as needed.

2) Use rsync in reverse to copy the remote backuppc pool back to your
primary backuppc server (assuming there was some minor issue with it as
opposed to total data loss).

3) Use dd to copy your remote backuppc pool to your primary backuppc,
either with an esata drive or over the net (with nc or ssh etc)
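For option 3 over the net, a rough sketch (device names are examples
only; stop backuppc on both ends and make sure the source filesystem is
unmounted or mounted read-only before copying):

```shell
# Block-level copy of the remote pool device back onto the primary's
# pool device. Both filesystems must be quiesced first, and dd to the
# wrong device is unrecoverable - double-check the device names.
ssh root@remote 'dd if=/dev/sdb1 bs=1M' | dd of=/dev/sdb1 bs=1M
```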

These are a couple of options, but in my limited experience, you would
only be looking to your remote backuppc system when:
1) Your local backuppc system has died (stolen, RAID issue, multiple HDD
failure, etc)
2) Your local site has been destroyed (fire, theft, natural disaster, etc)

There is perhaps one scenario you are not protecting against, which is
sabotage/hacking/etc. i.e., an internal or external agent manages to rm
-rf your local backup machine and/or one or more servers on the LAN.
Will your remote rsync replicate this data destruction, leaving you with
no backups? i.e., the malicious person does the deed at 2am, and your
rsync runs at 3am... will someone notice in time to stop the rsync and
save your final copy of the data?
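One cheap mitigation worth considering (my suggestion, not anything
BackupPC does for you): have the remote side take a hardlinked cp -al
snapshot of the pool before each rsync, so a replicated rm -rf can't
also take out yesterday's copy. A self-contained demo on a throwaway
directory standing in for the real pool path:

```shell
# Simulate the snapshot-before-rsync idea on a temp directory
# (stand-in for e.g. /var/lib/backuppc on the remote server).
pool=$(mktemp -d)
echo "backup data" > "$pool/file"

# Hardlink snapshot: near-zero extra space, taken before the rsync runs.
cp -al "$pool" "$pool.snapshot"

# A malicious deletion replicated by rsync --delete...
rm -f "$pool/file"

# ...does not touch the snapshot's copy of the data.
cat "$pool.snapshot/file"
```

This protects against replicated deletions, not against an attacker who
can also reach the remote box; and since the snapshot shares inodes,
in-place modification (e.g. rsync --inplace) would still hit both copies.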

I'm not trying to be negative, or to suggest that you haven't thought
about any of the above; you haven't provided enough information to know
that.
However, hopefully, it will provide some information that you or someone
else has not thought about/considered...

Regards,
Adam

_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/