BackupPC-users

Re: [BackupPC-users] New to town - where to begin...

From: <afpteam AT sbcglobal DOT net>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Wed, 18 Sep 2013 11:35:15 -0400
----- Original Message ----- 
From: "Carl Wilhelm Soderstrom" <chrome AT real-time DOT com>
To: <backuppc-users AT lists.sourceforge DOT net>
Sent: Wednesday, September 18, 2013 9:28 AM
Subject: Re: [BackupPC-users] New to town - where to begin...


> On 09/18 09:00 , afpteam AT sbcglobal DOT net wrote:
>> I've started out leasing a backup machine from http://backupsy.com,
>> finding them so far to be well resourced, well provisioned and responsive
>> to support questions; they have a fair amount of experience and several
>> data center locations established since their entry to the industry in
>> 2013.
>
> Just curious, is there a reason you bought a virtual machine at a remote
> location rather than experimenting on a local machine or virtual machine?
>
>> I assume BackupPC runs on the backup server itself? (unsure if client
>> servers require some part of it installed).
>
> BackupPC is a collection of perl scripts with a nice web interface front
> end, which store data on the machine BackupPC is installed on. (you could
> make it more complicated than that, but it's pretty advanced and there's
> not much call for it).
>
> The clients are usually backed up with rsync over SSH, or rsyncd. There is
> also a method using tar (only good for local backups, or data mounted on a
> network filesystem of some sort), an FTP method (which I haven't used),
> and a tar over SMB method (which is now broken in Samba-3.6.3 due to some
> Samba changes). There was a project to make a BackupPC client software
> which would run on the machine to be backed up, but I do not know the
> status of it.
>
>> Can BackupPC snapshot and export its own server's image? (see current
>> drive config below).
>
> BackupPC doesn't really do 'images', it backs up individual files, which
> you can assemble into a tarball when you want to restore.
>
> I have my BackupPC servers back up at least their own /etc/ directory.
> FWIW, my best-practices advice is to put /var/lib/backuppc on its own
> filesystem, so that if the '/' filesystem becomes corrupt or the disk dies
> or the like, the backed-up data is still ok; and if the backup data
> becomes corrupt the OS is still there to try to recover it. Also, BackupPC
> can store some data in the compressed pool of files
> (/var/lib/backuppc/cpool) and some in the uncompressed pool
> (/var/lib/backuppc/pool). I make sure to back up the BackupPC server's own
> backup in the uncompressed pool, so that in case of disaster and only the
> simplest tools being available to get at your data (i.e. BackupPC itself
> not functioning) you can still read your data. Here's my configuration for
> backing up the local machine.
>
> $ cat /etc/backuppc/localhost.pl
> #
> # Local server backup as user backuppc
> #
>
> # dunno why it needs to ping,
> # but after the upgrade to 3.2.1 this became necessary
> $Conf{PingCmd} = '/bin/true';
>
> $Conf{XferMethod} = 'tar';
>
> # let it back itself up anytime it wants to.
> $Conf{BlackoutPeriods} = [];
>
> $Conf{TarShareName} = ['/'];
>
> $Conf{BackupFilesExclude} = ['/proc', '/sys', '/var/lib/backuppc',
> '/var/lib/vmware', '/var/log', '/tmp', '/var/tmp', '/mnt', '/media'];
>
> $Conf{TarClientCmd} = '/usr/bin/env LC_ALL=C /usr/bin/sudo $tarPath -c -v -f - -C $shareName --totals';
>
> # remove extra shell escapes ($fileList+ etc.) that are
> # needed for remote backups but may break local ones
> $Conf{TarFullArgs} = '$fileList';
> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>
> # turning off compression on these files, so they can be recovered without
> # backuppc.
> # wouldn't make sense to need your backup server,
> # in order to recover your backup server, now would it?
> $Conf{CompressLevel} = 0;
>
>
> -- 
> Carl Soderstrom
> Systems Administrator
> Real-Time Enterprises
> www.real-time.com
>

Hi Carl,

Thank you for your reply ...

I may be putting more stock than I should in BackupPC adopting a "method",
or grossly misunderstanding feasibility altogether. My goal is to avoid
file-based recovery as much as possible, hence my reference to saving and
restoring an "image" of the OS, the kind of snapshot I'm hoping a VM lends
itself to.


> BackupPC doesn't really do 'images', it backs up individual files, which
> you can assemble into a tarball when you want to restore.

A snapshot file-set is still "file based" as far as BackupPC is concerned,
I would hope; an LVM snapshot rolled into a tarball might serve as my
definition of "image".  I may have abused the term "image" (I don't mean an
ISO): what I mean in this context is compressing an entire OS from two
partitions into tar-based, encrypted files and transmitting them
off-machine.  I may also be giving up incremental backups in favor of an
expedient "total restore" concept.

> Just curious, is there a reason you bought a virtual machine at a remote
> location rather than experimenting on a local machine or virtual machine?

Yes: mostly proximity and bandwidth resources, with enough redundancy to
attain reliability.  I handle a number of global operations that rely on
combinations of our own AND 3rd-party VPS, which is one reason I need a
restoration solution that works full scope across many VM configurations.
We tend to "nest in" solutions at each data center we deal with globally,
and these are fairly small client machines in every instance, Windows and
Linux both.

> BackupPC is a collection of perl scripts with a nice web interface front
> end, which store data on the machine BackupPC is installed on. (you could
> make it more complicated than that, but it's pretty advanced and there's
> not much call for it).

I was hoping to springboard off this script for VMs, posted at
http://www.redhat.com/archives/virt-tools-list/2009-October/msg00069.html
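
The script linked above works at the hypervisor level; on a guest with
LVM-backed storage, the same idea can be sketched from inside the VM.
Below is a minimal sketch only: the volume group (vg0), logical volume
(root), snapshot size, paths, and remote host are all hypothetical, and
the run() wrapper just prints each command so the sketch is safe to
dry-run (drop the echo, and run as root, to execute for real).

```shell
#!/bin/sh
# Sketch: whole-VM "image" backup via an LVM snapshot, tar, and gpg.
# All names below (vg0, root, the remote host) are hypothetical.
VG=vg0; LV=root; SNAP=root-snap
REMOTE="backuppc@backup.example.com"   # hypothetical off-machine target

run() {                   # print each command; drop the echo to execute
    echo "$@"
}

# 1. Freeze a point-in-time copy of the root LV
run lvcreate --snapshot --size 2G --name "$SNAP" "/dev/$VG/$LV"

# 2. Mount it read-only and tar the whole tree
run mount -o ro "/dev/$VG/$SNAP" /mnt/snap
run tar -czf /var/tmp/root-snap.tar.gz -C /mnt/snap .

# 3. Encrypt, ship off-machine, then clean up
run gpg --symmetric --cipher-algo AES256 /var/tmp/root-snap.tar.gz
run scp /var/tmp/root-snap.tar.gz.gpg "$REMOTE:/var/backups/"
run umount /mnt/snap
run lvremove -f "/dev/$VG/$SNAP"
```

Restoring is the reverse: decrypt, then untar onto a freshly built VM's
filesystem, which is roughly the "blow the snapshot back in place" step.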


> I have my BackupPC servers back up at least their own /etc/ directory.
> FWIW, my best-practices advice is to put /var/lib/backuppc on its own
> filesystem, so that if the '/' filesystem becomes corrupt or the disk dies
> or the like, the backed-up data is still ok; and if the backup data
> becomes corrupt the OS is still there to try to recover it.

I would like to avoid depending on data stored on the same server for
recovery, except that a "snapshot" process suggests doing this in real
time anyway, to off-machine storage.  Again, this is "disaster" minded:
disaster does not assume something simple to repair, but rather the need
to quickly rebuild the entire package in as few steps as possible.
Ultimately the off-server backup(s) need to be the gold standard.

I may also need to treat the backup server as the one "different" case
among the rest, but having a "pair" of identical backup servers in
separate locations lends itself to a cross-distributed arrangement where
each backup server backs up the other.
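
A minimal sketch of what the per-host config for such a pairing might look
like in BackupPC, following the style of Carl's localhost.pl; the filename,
peer hostname, and share list are hypothetical, and rsync over SSH is
assumed as the transfer method:

```perl
# /etc/backuppc/backup-b.pl on server "backup-a" (and vice versa on the
# peer).  Hypothetical sketch only; adjust hostnames and shares.
$Conf{XferMethod} = 'rsync';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';

# Back up the peer's config and its backup pool
$Conf{RsyncShareName} = ['/etc', '/var/lib/backuppc'];

# Keep the peer's copy uncompressed, per Carl's disaster-recovery advice,
# so it is readable with simple tools if BackupPC itself is down.
$Conf{CompressLevel} = 0;
```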

Restoring an "OS only" at a VPS vendor is quick and easy.  Following that
by overwriting the OS from a snapshot, to restore the integrated
client-specific applications and data, makes the most sense to me, if a
VM's disk storage can aptly represent a "stateful machine".  Many times,
discerning the OS file sets from the application's, and again from the
data's (in Windows, for example), is impossible, so again a "snapshot"
represents a container-based recovery concept, if I'm not all wet in
assuming this is attainable.  When dealing with client user VMs you can
never be sure what the client did to the OS or the data, and in a disaster
situation the VPS vendor is likely to be facing severe issues of their own
at the time of a recovery.

Tarball by FTP was attractive for its simplicity and its compression and
encryption options, since we control the FTP service and related security
at all points.  Using Duplicity and/or Duplicati at the client was a
consideration.
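
For the Duplicity route, here is a minimal sketch of what a full
encrypted backup and its restore might look like over FTP.  The GPG key
ID, FTP host, and paths are hypothetical, duplicity reads the FTP
password from the FTP_PASSWORD environment variable, and the run()
wrapper prints rather than executes the commands:

```shell
#!/bin/sh
# Sketch: full encrypted off-machine backup with duplicity over FTP.
# Key ID, host, and paths below are hypothetical placeholders.
GPG_KEY="DEADBEEF"                          # hypothetical GPG key id
TARGET="ftp://backup@ftp.example.com/vps1"  # hypothetical FTP target

run() {                   # print each command; drop the echo to execute
    echo "$@"
}

# Full, GPG-encrypted backup of /, skipping volatile paths
run duplicity full \
    --encrypt-key "$GPG_KEY" \
    --exclude /proc --exclude /sys --exclude /tmp \
    / "$TARGET"

# Pull the whole set back down into /mnt/restore
run duplicity restore "$TARGET" /mnt/restore
```

The trade-off matches the one described above: a "full" run gives the
single restorable package, at the cost of shipping more data than an
incremental would.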

For a Windows VM, this would mean using a compressed MS Shadow Copy,
shrunk down, versus what I hope a "snapshot" can do on Linux.

Being the novice admin I am, file-level restoration seems fraught with
issues of time stamps, mount folders, and a whole myriad of "how to
capture and restore Linux" questions, not to mention the "running" state
of a client VM.  It looked to me like a dynamic snapshot of the entire
running VM might yield a restorable "set" which could be sent back onto
the server, fsck'ed, and be right back up and running.  I may be
disillusioned, overstating the simplicity here, but hopefully you can see
my desire for a "package" restoration versus file sets.

We deal with multiple flavors of several 3rd-party VPS dependencies.  So
the idea of remotely restoring 50 client VMs at one location, patching
each one differently back to a running state from file restores, LOOKED
like 3 hours or worse per VM in a disaster recovery.  I need something
where, at worst, the basic VM is rebuilt from a template, then the last
working snapshot is blown back in place and off we go.  Hopefully the need
to re-cook the template isn't even an issue, since an image restore should
basically reconstruct the entire VM, provided the boot loader and
container are still intact.

As for a lighter set of "client-centric" data, that might warrant a
scripted, secondary file-based approach, for the case where a client
simply blew away his essential DB, for example.  That I can handle as a
file-based process.  For disaster recovery, I'm fearful of facing
file-level situations, for lack of my own competence if nothing else.

Again, please forgive me if I'm over-simplifying an idealistic approach.
I know many well-accomplished admins may look at this as folly, but I
liken it to a "Restoration As A Service" (RAAS) kind of thing, attempted
outside the VPS hypervisor context.  It may solve all of my woes at the
price of larger, less frequent backup copies, sacrificing incrementals if
necessary.

Is the approach even viable, perhaps with another tool set if not with
BackupPC?

Mike 


_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
