Amanda-Users

Re: Super Slow full dump.

From: Jon LaBadie <jon AT jgcomp DOT com>
To: amanda-users AT amanda DOT org
Date: Fri, 30 Jan 2004 20:46:07 -0500
On Fri, Jan 30, 2004 at 01:37:28PM -0800, Tavis Gustafson wrote:
> I just ran a test dump of a NetApp F760 Filer.  The volume is 200GB total,
> broken down into "dataglobs".  Each dataglob has its own disklist entry
> (about 50 entries in all).
> The backup runs over nfs via tar.  Amanda has a 200GB IDE spool disk.
> The drive is a Sony AIX-700C (40GB/hr) inside an automatic tape changer.
> 
> This backup took 50 hours to complete.  Anyone know of any obvious things
> I can do to speed this process up?
> 
> Thanks, Tavis
> 
> --Sample of disklist :
> 
> localhost /mnt/netapp/squishy/postal/acetate always-full
> localhost /mnt/netapp/squishy/postal/cadelle always-full
> localhost /mnt/netapp/squishy/postal/cadence always-full
> 

As you are always doing full dumps, you may be able to simplify
the planning phase.  Amanda likes to know how big a level 0, a
level 1, and so on would be before deciding what to do.  To get
those estimates it runs tar with the output sent to /dev/null.
Even with always-full it probably still runs at least a level 0
tar to /dev/null to determine the size, so the estimate phase
alone can read the whole 200GB over NFS.  Check the archives for
discussions of ways to simplify sendsize or calcsize.
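
If your Amanda release supports the "estimate" dumptype option
(check amanda.conf(5) for your version -- this is only a sketch,
and "always-full-fast" is a made-up name), something along these
lines avoids the full tar-to-/dev/null pass:

    # amanda.conf -- hypothetical dumptype built on always-full
    define dumptype always-full-fast {
        always-full
        # "server" reuses sizes from previous runs instead of
        # running tar to /dev/null; "calcsize" does a faster
        # stat()-based walk of the filesystem.
        estimate server
    }

Then point the disklist entries at always-full-fast instead of
always-full.  With nothing but full dumps, the estimates don't
need to be very accurate anyway.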

Try to reduce the number of dumps running concurrently.  If you
have 10 DLEs all working on a single disk drive, or all competing
for the available network bandwidth, you can get performance-killing
disk thrashing or network collisions.  One of my physical drives
holds 6 DLEs; I once compared dumping them sequentially versus all
at the same time, and running them all at once was significantly
slower.  Spindle numbers in my disklist helped in my case (see the
sketch below).  That was a local drive, but NFS mounts may benefit
also.
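
Roughly what that looks like -- the spindle numbers below are
invented; the idea is just to give DLEs that share a physical disk
(or, here, the same filer) the same number so Amanda won't dump
them at the same time:

    # disklist: hostname  diskname  dumptype  [spindle]
    localhost /mnt/netapp/squishy/postal/acetate always-full 1
    localhost /mnt/netapp/squishy/postal/cadelle always-full 1
    localhost /mnt/netapp/squishy/postal/cadence always-full 1

Since all of your DLEs are on "localhost", the maxdumps setting in
the dumptype (and inparallel in amanda.conf) also caps how many of
them run at once; maxdumps 1 or 2 is worth trying.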

-- 
Jon H. LaBadie                  jon AT jgcomp DOT com
 JG Computing
 4455 Province Line Road        (609) 252-0159
 Princeton, NJ  08540-4322      (609) 683-7220 (fax)
