Amanda-Users

Subject: Re: 77 hour backup of 850gb?
From: Jon LaBadie <jon AT jgcomp DOT com>
To: amanda-users AT amanda DOT org
Date: Thu, 29 Jun 2006 20:24:50 -0400
On Thu, Jun 29, 2006 at 06:48:33PM -0400, Paul Graf wrote:
> I've got a strange problem with my backups.  I'm running Amanda 2.5 on FC4, 
> and the problem lies with one large directory that needs to be backed up.  
> It's over 700 gigs, so it has to span tapes.  Eventually the backup will 
> complete, but I'm getting an average of 3 MB/s.  However, if I back up 
> something smaller (I have a few directories being backed up that are around 
> 100 megs), I get 25 MB/s.  I also did some test tar backups of a few gigs, 
> and those went at around 25 MB/s as well.
> 

As Frank already noted, you are dumping directly to tape, not using
your defined holding disk.  A 700+ GB dump can never fit in a 20GB
holding area, so Amanda bypasses it and goes straight to tape.
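
For scale: 850 GB at 3 MB/s works out to (850 x 1024 MB) / (3 MB/s),
roughly 290,000 seconds, or about 80 hours, which lines up with your
77-hour run.  At the 25 MB/s you see on the small dumps, the same
data would move in under 10 hours.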

> 
> holdingdisk hd1 {
>    comment "main holding disk"
>    directory "/tmp/amandahld"   # where the holding disk is
>    use 20000 Mb           # how much space can we use on it
>    chunksize 1Gb          # size of chunk if you want big dump to be
> }
> 

Is your /tmp directory (file system) large enough to support a
20GB holding disk and all the other uses you might have for it?
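
If it is not, one option is to give the holding disk its own file
system, sized bigger than your largest dump, since Amanda only stages
a dump on the holding disk if it fits.  A sketch (the /amanda/holding
mount point is hypothetical):

    holdingdisk hd1 {
       comment "main holding disk"
       directory "/amanda/holding"  # dedicated file system, not /tmp
       use 750000 Mb                # must exceed the largest dump
       chunksize 1Gb
    }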

> # dumptypes
> 
> define dumptype global {
>    comment "Global definitions"
>    tape_splitsize 20000 mbytes
>    split_diskbuffer "/tmp/amandasplit/split"
>    index yes
>    holdingdisk yes
> }

You really do like /tmp, don't you?

If I understand tape spanning correctly, and since we are going
direct to tape, which we are, you will need at least 2 x 20GB for the
split buffers: one being filled while another is dumping to tape.

So: do you have over 40GB free in /tmp?  And if any other DLEs are
dumping to the holding disk (also on /tmp) in parallel, that could
take as much as another 20GB on top of it.
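
One way to ease that, sketched below (the /amanda/split path is
hypothetical): shrink tape_splitsize so the two in-flight buffers are
smaller, and put split_diskbuffer on its own file system instead of
/tmp.

    define dumptype global {
       comment "Global definitions"
       tape_splitsize 10000 mbytes       # 2 x 10GB of buffer, not 2 x 20GB
       split_diskbuffer "/amanda/split"  # dedicated file system, not /tmp
       index yes
       holdingdisk yes
    }

Smaller splits should cost a little more tape in filemarks, but they
cap the scratch space at roughly 2 x tape_splitsize.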


-- 
Jon H. LaBadie                  jon AT jgcomp DOT com
 JG Computing
 4455 Province Line Road        (609) 252-0159
 Princeton, NJ  08540-4322      (609) 683-7220 (fax)
