Subject: Re: performance of backup process
From: Paul Bijnens <Paul.Bijnens AT xplanation DOT com>
To: amanda List <amanda-users AT amanda DOT org>
Date: Mon, 17 Nov 2008 09:50:03 +0100
On 2008-11-17 09:02, Mister Olli wrote:
Hi list,

I've been running Amanda for some time now, and it really rocks. It's a
totally stable and easy-to-use backup solution with high reliability.
Great work.

Since my data volume is increasing, I'm wondering how to speed up the
backup process.
My network includes about 20 client machines (mostly *NIX), and one
Amanda server which tapes to hard drives.
How can I tune Amanda in this scenario? Are there any general hints and
tips (besides tuning HD performance ;-))?

The best program to spot bottlenecks and to get an overview of the
performance of the whole backup process is "amplot".

It takes some time to get it running and to learn to read the output,
but it's really worthwhile.
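
For example, something like this (the amdump path is the one shown in
the amstatus output below; adjust it to your own log directory):

    # plot the run that amstatus reads
    amplot /var/log/amanda/daily/amdump

    # or an older, rotated run
    amplot /var/log/amanda/daily/amdump.1

You need gnuplot installed for it to produce the graphs.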




Right now I'm looking at the 'amstatus' output of my daily backup job,
which gives me the following:
====================================================
Using /var/log/amanda/daily/amdump from Mon Nov 17 08:31:51 CET 2008

172.31.2.10:/daten/business 0    11085m dumping     4380m ( 39.52%)
(8:38:25)
172.31.2.10:/daten/company  1       16m finished (8:38:25)
172.31.2.10:/daten/intern   0    24001m wait for dumping
172.31.2.10:/daten/software 1   242107m wait for dumping
172.31.6.10:/daten/blub     0     1623m finished (8:40:33)
172.31.6.10:/daten/business 1        0m finished (8:37:14)
172.31.6.10:/daten/doku     0     5413m finished (8:45:19)

SUMMARY          part      real  estimated
                           size       size
partition       :   7
estimated       :   7               284248m
flush           :   0         0m
failed          :   0                    0m           (  0.00%)
wait for dumping:   2               266108m           ( 93.62%)
dumping to tape :   0                    0m           (  0.00%)
dumping         :   1      4380m     11085m ( 39.52%) (  1.54%)
dumped          :   4      7054m      7054m (100.00%) (  2.48%)
wait for writing:   0         0m         0m (  0.00%) (  0.00%)
wait to flush   :   0         0m         0m (100.00%) (  0.00%)
writing to tape :   0         0m         0m (  0.00%) (  0.00%)
failed to tape  :   0         0m         0m (  0.00%) (  0.00%)
taped           :   4      7054m      7054m (100.00%) (  2.48%)
  tape 1        :   4      7054m      7054m (  3.44%) ERNW-daily02
9 dumpers idle  : client-constrained
taper writing, tapeq: 0
network free kps:    248976
holding space   :    408372m ( 96.12%)
chunker0 busy   :  0:00:00  (  0.00%)
chunker1 busy   :  0:00:00  (  0.00%)
 dumper0 busy   :  0:05:25  ( 40.25%)
 dumper1 busy   :  0:08:04  ( 60.00%)
   taper busy   :  0:04:12  ( 31.21%)
 0 dumpers busy :  0:05:23  ( 40.04%)            not-idle:  0:05:23
(100.00%)
 1 dumper busy  :  0:03:36  ( 26.79%)  client-constrained:  0:02:25
( 67.29%)
                                               start-wait:  0:01:10
( 32.71%)
 2 dumpers busy :  0:04:27  ( 33.15%)  client-constrained:  0:04:27
(100.00%)
====================================================

Above you say you have 20 clients, but this output shows only two
clients. Is this a test?
You can see from the amstatus output that the dumpers can hardly run in
parallel, because the default parameters allow only one dumper per
client (hence the "client-constrained" idle reason above).
When you have more clients, Amanda will run more dumpers in parallel,
speeding up the whole process.
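
A minimal amanda.conf sketch of the knobs involved (the dumptype name is
made up; "inparallel" and "maxdumps" are the real parameters, and the
values are only examples):

    # server-wide limit on the number of simultaneous dumpers
    inparallel 4

    define dumptype example-parallel {
        # ... your usual dumptype options ...
        maxdumps 2    # allow 2 simultaneous dumps from the same client
    }

Note that maxdumps > 1 only helps when the client and its disks can
really feed two dumps at once; otherwise the dumps just slow each other
down.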


From my understanding, Amanda is dumping (writing to the holding disk)
and taping (writing from the holding disk to virtual tapes) at the same
time.

Indeed.

Doesn't this reduce the dumper's speed because of the head seeks on the
holding disk? Is there a way to prevent this scenario (as long as the
holding disk is big enough for all data that belongs to the job)?

Yes, it does indeed reduce the speed.

Optimizing the holdingdisk subsystem is indeed very important for the
newer tape drives like LTO4 that *need* a sustained feed of *at least*
80 MB/sec to avoid shoeshining.

Things people often do:
- use a separate controller for disk and tape.
- use a RAID striped over several disks as holdingdisk.
- use large buffers (tapebufs or device_output_buffer_size).
- avoid reading from and writing to the holdingdisk at the same
  time (flush-threshold-dumped).
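
A sketch of the last two items in amanda.conf (parameter names as in
recent Amanda versions; the values are only examples):

    # use a bigger buffer between the taper and the output device
    device_output_buffer_size 4096k

    # don't start the taper until a full tape's worth of dumps
    # (100% of tapelength) is sitting on the holding disk, so the
    # taper isn't reading while the dumpers are still writing
    flush-threshold-dumped 100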

(other tips are welcome)


--
Paul Bijnens, xplanation Technology Services        Tel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUM    Fax  +32 16 397.512
http://www.xplanation.com/          email:  Paul.Bijnens AT xplanation DOT com
***********************************************************************
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, ^^, *
* F6, quit, ZZ, :q, :q!, M-Z, ^X^C, logoff, logout, close, bye, /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt,  abort,  hangup, *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e,  kill -1 $$,  shutdown, *
* init 0, kill -9 1, Alt-F4, Ctrl-Alt-Del, AltGr-NumLock, Stop-A, ... *
* ...  "Are you sure?"  ...   YES   ...   Phew ...   I'm out          *
***********************************************************************

