Subject: Re: [BackupPC-users] Copying BackupPC to tape for off-site storage - very slow
From: Josh Malone <jmalone AT nrao DOT edu>
To: "General list for user discussion, questions and support" <backuppc-users AT lists.sourceforge DOT net>
Date: Tue, 02 Mar 2010 12:59:17 -0500
On Tue, 2 Mar 2010 10:53:15 -0600, Sean Carolan <scarolan AT gmail DOT com>
wrote:
> Hello BackupPC users:
> 
> We have a BackupPC system that has been working well for us for the
> past three years.  There is about 1.2 terabytes of data on our
> BackupPC partition and we'd like to be able to spool it off to tape
> for off-site storage.  We have an HP d2d device that gets about 50-60
> MB/s throughput during testing.  When I try to back up our BackupPC
> partition, however, I only get around 25 MB per *minute*.  At this
> rate it will take days to back up the entire partition.  I'm using
> Bacula to manage the tape backups.
> 
> Would this go faster if I unmounted the partition and tried to do a
> block-level copy of the entire thing?  How would you handle this?

Just a user in a similar situation chiming in :)

Wanting to do the same thing as you (but with a smaller pool), I started
out using 'dump' to copy the pool disk, first to tape and later to disk,
and found that dump consumed an awful lot of memory due to the sheer
number of hardlinks it had to deal with. I switched to GNU tar and things
were quite manageable. I'm not familiar with your HP device, but I just
tar'red off to an external AIT drive at first and changed tapes manually
when needed; a rough sketch is below. The dump took a while (AIT ain't
fast) but I wasn't worried.
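
The tar pass was along these lines -- a minimal sketch, assuming the pool
lives at /var/lib/backuppc and the tape drive is /dev/nst0 (both are
assumptions; adjust for your install):

  # /dev/nst0 is the non-rewinding device, so the tape stays positioned
  # for appending further archives. GNU tar records hardlinks as links,
  # which keeps the archive close to the on-disk size of the pool.
  tar -cf /dev/nst0 -C /var/lib/backuppc .

  # Rewind and verify the archive is readable without extracting it:
  mt -f /dev/nst0 rewind
  tar -tf /dev/nst0 > /dev/null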

However, I've now switched to just making "aux copies" using the
'archive' host type built into BackupPC. The disadvantage is that I don't
get the _entire_ backup history off site, just the latest synthetic full.
The advantages, though, are numerous (a restore example follows the
list):

  - it takes far less time (and would take _even_ less if I turned off
compression)
  - it's triggered from the web CGI, so I can hand the task off to an
"operator"
  - the aux copies are just tarballs, so you can do a bare-metal restore
if needed
     (i.e., you don't need BackupPC or its utils to read them)
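
That last point matters most for disaster recovery: reading an aux copy
back needs nothing but tar. A minimal sketch, assuming a hypothetical
archive file host1.0.tar.gz produced with gzip compression:

  # The tarball holds plain files, not BackupPC's pooled/attrib format,
  # so ordinary GNU tar can unpack it on any machine.
  mkdir -p /mnt/restore
  tar -xzf host1.0.tar.gz -C /mnt/restore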

I'm now just using external FireWire hard drives, since I'm only writing
about 60 GB instead of the full ~380 GB in my pool. I make an aux copy
every 2 weeks and take it off site. I can hold about 4 to 5 months of aux
copies on the 4 drives I have available.

In short, unless you *need* to preserve the entire backup history in the
event of a full-site catastrophic failure, I'd just use archive hosts to
create an aux copy.
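
If you go that route, the main knobs live in the archive host's config
file. A sketch using BackupPC 3.x setting names (from memory -- check
the comments in conf/config.pl before trusting them):

  # Where the tarballs are written (e.g., a mounted external drive)
  $Conf{ArchiveDest}  = '/mnt/offsite';

  # 'gzip', 'bzip2', or 'none' -- 'none' is fastest, as noted above
  $Conf{ArchiveComp}  = 'gzip';

  # 0 = don't split the output into fixed-size chunks
  $Conf{ArchiveSplit} = 0;

You list the archive host in BackupPC's hosts file like any other host,
set $Conf{XferMethod} = 'archive' in its per-host config, and kick off
runs from the CGI as described above.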

-Josh

-- 
--------------------------------------------------------
       Joshua Malone       Systems Administrator
     (jmalone AT nrao DOT edu)    NRAO Charlottesville
        434-296-0263         www.cv.nrao.edu
        434-249-5699 (mobile)
BOFH excuse #202:

kernel panic: write-only-memory (/dev/wom0) capacity 
exceeded.
--------------------------------------------------------
