hi....
> The best program to spot bottlenecks and get an overview of the
> performance of the whole backup process is "amplot".
ok, then this will be my way to go. thanks for the tip :-)
> It takes some time to get it running and to study the output, but
> it's really worthwhile.
from the manpage it 'looks' pretty simple, since amplot has only a few
options. what are the caveats?
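for the archives, the invocation I plan to try, going by the man-page
(the log path is just where my config writes its amdump files, as in
the amstatus output below, and the ".1" suffix assumes the usual
rotation of the last finished run):

    # plot the most recent finished run; -p writes postscript files
    # instead of displaying the gnuplot graphs interactively
    amplot -p /var/log/amanda/daily/amdump.1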
> >
> > right now I'm looking at the 'amstatus' of my daily backup job, which
> > gives me the following output
> > ====================================================
> > Using /var/log/amanda/daily/amdump from Mon Nov 17 08:31:51 CET 2008
> >
> > 172.31.2.10:/daten/business 0 11085m dumping 4380m ( 39.52%) (8:38:25)
> > 172.31.2.10:/daten/company 1 16m finished (8:38:25)
> > 172.31.2.10:/daten/intern 0 24001m wait for dumping
> > 172.31.2.10:/daten/software 1 242107m wait for dumping
> > 172.31.6.10:/daten/blub 0 1623m finished (8:40:33)
> > 172.31.6.10:/daten/business 1 0m finished (8:37:14)
> > 172.31.6.10:/daten/doku 0 5413m finished (8:45:19)
> >
> > SUMMARY part real estimated
> > size size
> > partition : 7
> > estimated : 7 284248m
> > flush : 0 0m
> > failed : 0 0m ( 0.00%)
> > wait for dumping: 2 266108m ( 93.62%)
> > dumping to tape : 0 0m ( 0.00%)
> > dumping : 1 4380m 11085m ( 39.52%) ( 1.54%)
> > dumped : 4 7054m 7054m (100.00%) ( 2.48%)
> > wait for writing: 0 0m 0m ( 0.00%) ( 0.00%)
> > wait to flush : 0 0m 0m (100.00%) ( 0.00%)
> > writing to tape : 0 0m 0m ( 0.00%) ( 0.00%)
> > failed to tape : 0 0m 0m ( 0.00%) ( 0.00%)
> > taped : 4 7054m 7054m (100.00%) ( 2.48%)
> > tape 1 : 4 7054m 7054m ( 3.44%) ERNW-daily02
> > 9 dumpers idle : client-constrained
> > taper writing, tapeq: 0
> > network free kps: 248976
> > holding space : 408372m ( 96.12%)
> > chunker0 busy : 0:00:00 ( 0.00%)
> > chunker1 busy : 0:00:00 ( 0.00%)
> > dumper0 busy : 0:05:25 ( 40.25%)
> > dumper1 busy : 0:08:04 ( 60.00%)
> > taper busy : 0:04:12 ( 31.21%)
> > 0 dumpers busy : 0:05:23 ( 40.04%)           not-idle: 0:05:23 (100.00%)
> > 1 dumper busy  : 0:03:36 ( 26.79%) client-constrained: 0:02:25 ( 67.29%)
> >                                            start-wait: 0:01:10 ( 32.71%)
> > 2 dumpers busy : 0:04:27 ( 33.15%) client-constrained: 0:04:27 (100.00%)
> > ====================================================
>
> Above you say you have 20 clients, but this output shows only 2
> clients. Is this a test?
> You can see from amstatus that no dumpers run in parallel, because
> the default parameters restrict Amanda to one dumper per client.
> When you have more clients, Amanda will run several dumpers in
> parallel, speeding up the whole process.
20 altogether. the output above is from my daily backup job, which
covers only the data of 2 file servers. all other hosts store less
relevant data, so they're backed up only once a week.
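for the archives: if I read amanda.conf(5) right, the knobs behind
this are 'inparallel' (total number of dumpers) and 'maxdumps' in the
dumptype (dumpers per client). a sketch, with illustrative values and
an invented dumptype name:

    # amanda.conf
    inparallel 10              # up to 10 dumpers over all clients

    define dumptype fileserver {
        comment "illustrative dumptype"
        maxdumps 2             # allow 2 parallel dumpers on one client
    }

(with maxdumps > 1, DLEs on the same physical disk should get the same
spindle number in the disklist, so two dumpers don't seek against each
other.)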
> > from my understanding amanda is dumping (writing to holding disk) and
> > taping (writing from holding disk to virtual tapes) at the same time.
> Indeed.
>
> > Doesn't this reduce the dumper's speed because of head seeks on the
> > holding disk? Is there a way to prevent this (as long as the
> > holding disk is big enough for all the data that belongs to the job)?
>
> Yes it does reduce the speed indeed.
*DAMN*
> Optimizing the holdingdisk subsystem is indeed very important for
> the newer tape drives like LTO4, which *need* a sustained feed of
> *at least* 80 MB/sec to avoid shoeshining.
ok, that doesn't bother me, since I use hard disks with a virtual tape
library to back up my data, and the holding disk is physically a
different drive from the one storing the data.
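for completeness, the vtape side of my setup looks roughly like this
(paths invented, and the exact slot layout for chg-disk is in its
docs):

    # amanda.conf
    tpchanger "chg-disk"               # file:-based changer script
    tapedev   "file:/vtapes/daily"     # directory containing the slots
    holdingdisk hd1 {
        directory "/holding"           # physically a separate drive
        use -1000 mb                   # use all but 1000 MB (example)
    }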
> Things people often do:
> - use a separate controller for disk and tape.
> - use a RAID striped over several disks as holdingdisk.
> - use large buffers (tapebufs or device_output_buffer_size)
> - avoid reading from and writing to the holdingdisk at the same
> time (flush-threshold-dumped)
ok, I read the amanda.conf(5) man-page on that parameter, but I don't
understand what is meant by the term 'volume'. Is it a tape, or is it
the total amount of data for this backup job?
please help me clear this up.
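in case it helps others searching the archives, these are the lines
I'm experimenting with, going by amanda.conf(5) (the values are just
my first guess):

    # amanda.conf
    device_output_buffer_size 4096k   # bigger buffer in front of the device

    # don't start the taper before <percent> of a volume is on the
    # holding disk, so dumpers and taper don't fight over the spindle
    flush-threshold-dumped    100
    flush-threshold-scheduled 100     # must be >= flush-threshold-dumped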
greetz
olli