On Sunday 22 June 2003 17:13, Peter Kunst wrote:
>SUMMARY (for beginners like me :-)
>
>When you start a new setup and have many disks to back up, migrate
>them gradually: add no more than ~5 partitions per amanda run (the
>right number also depends on the size of your disks and tapes) and
>see how it goes. Allow a few weeks of testing to see how amanda does
>its work.
>
>That's what i did initially (i guess this is the correct way to do
>it). Once all my disks were included in my amanda setup, i wanted to
>see what happens when forcing a level 0 on all disks. But amanda
>knows very well what resources are needed to fulfill its tasks (and
>i was wrong when calculating them myself).
>
>For my setup (more than 400GB for a level 0 dump), with "runtapes 2",
>hardware compression enabled, and LTO1 tapes (~100GB native
>uncompressed capacity, tapetype defined with 100GB per tape), this
>wasn't enough to do a level 0. So amanda refuses, telling me "dumps
>too big", because i told it (her) there are only two tapes with
>about 100GB each available. That is why i got these "dumps too big"
>errors.
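The arithmetic bears that out; roughly (numbers rounded from your
report):

```shell
#!/bin/sh
# rough planner arithmetic: total level-0 estimate vs. tape capacity
total_kb=$((400 * 1024 * 1024))      # ~400GB scheduled for level 0
tape_kb=$((100864 * 1024))           # tapetype length: 100864 mbytes
runtapes=2
capacity_kb=$((tape_kb * runtapes))  # what amanda believes will fit
if [ "$total_kb" -gt "$capacity_kb" ]; then
    echo "dumps too big: need ${total_kb}kb, have ${capacity_kb}kb"
fi
```

About 400GB requested against about 200GB of declared capacity, so the
planner has no choice but to refuse roughly half the DLEs.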
>
>To see the full story and config, read the lines below.
>
>A big thank you to Jon LaBadie and Gene Heskett, who gave me the
>right hints on where to search.
>
>Thanks also to Wayne A. Byarlay and Mike Guthrie for their vacation
>replies ;-)
>
>...and here is the rest of the story:
>
>Jon LaBadie wrote:
>> Repeating and rearranging a couple of lines:
>> > > cskdev013 5:42 180538.4 179.0 26
>> > > taper: tape dev013 kb 201132320 fm 27 writing file: short
>> > > write
>>
>> To me, this says 26 DLE's were successfully written to the first
>> tape including 180GB of valid, properly written data. It then
>> tried the 27th DLE and hit the end of the tape at 201GB.
>>
>> > > taper: retrying nfs5-120:/gigd2.0 on new tape: [writing file:
>> > > short write]
>>
>> I think this is the DLE it was trying to write when it failed.
>> It must be bigger than the ~20GB that was left on the tape.
>
>agreed. now i also see why enabling hardware compression is a bad
>idea while using amanda.
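For anyone following along: with the Solaris st driver, the 'c' in
/dev/rmt/0cbn selects the compressing density code, so turning hardware
compression off is usually just a matter of pointing tapedev at a
non-compressing node. Whether 'h' really disables compression depends
on the drive and st.conf, so verify with a test run:

```
# amanda.conf - let amanda see the tape's true native capacity
tapedev "/dev/rmt/0hbn"   # 'h' density instead of 'c' (verify for your drive)
```

With hardware compression off, the tapetype length stays honest and the
planner's capacity estimates stop being a moving target.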
>
>> > > cskdev014 6:01 21182.4 21.0 1
>> > > taper: tape dev014 kb 21690816 fm 1 [OK]
>>
>> That looks like one DLE made it to the second tape, a 21GB DLE.
>
>in the next run after this one, amanda scheduled this partition again
>for a level 0.
>
>> Did any DLE's fail? It doesn't seem so from what you have shown.
>> That would be in a missing part of the report.
>>
>> If yes, how big were they? Bigger than the 80GB amanda thinks
>> remain on the tape?
>
>Yes, as already stated, around 50 percent (~200GB) failed with
><subject:>, "dumps too big". I guess this was caused by my tapetype
>definition (~100GB per tape) and "runtapes 2", __and__ (my mistake)
>forcing a level 0 for all partitions, just to see what happens.
>For the next weeks, i will try a setup with hardware compression
>disabled and more than 2 "runtapes", but that might not be needed
>after ~2 weeks of NOT doing level 0s on all partitions in one run.
>will see...
>
>However, to fulfill requirements for this list, i will append my
>more or less complete setup here:
>
>---------8<---------- amanda.conf ----------->8---------
>
>#
># amanda.conf - Config for my setup on Solaris7
>#
>
>org "dev"
>mailto "amanda"
>dumpuser "amanda"
>
>inparallel 8
>dumporder "BTstBTst"
>netusage 10800 Kbps
>dumpcycle 14 days
>runspercycle 5
You aren't running every night? If you run every night, or at least
the 5 business days (which would make runspercycle 10), then amanda
will have a chance to level out the tape usage in about 2 dumpcycles,
at which point runtapes can probably be reduced to 1.
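With nightly weekday runs, that suggestion would look like this (a
sketch, not your actual config):

```
# amanda.conf - spread the level 0s across the cycle
dumpcycle    14 days
runspercycle 10        # 5 business days x 2 weeks
runtapes     1         # usually enough once full dumps are spread out
```

With 10 runs per cycle, amanda only needs to fit about a tenth of the
total level-0 load onto each night's tape.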
>tapecycle 24 tapes
Not quite enough for my taste, but I'm paranoid.
>bumpsize 20 Mb
>bumpdays 1
>bumpmult 4
>
>etimeout 360
>dtimeout 1800
>ctimeout 30
>
>tapebufs 30
>
>runtapes 2
>tpchanger "stc-changer"
>tapedev "/dev/rmt/0cbn"
>rawtapedev "/dev/null"
>changerfile "/home/amanda/etc/amanda/cskdev/changer"
>changerdev "/dev/rmt/stctl"
>
>maxdumpsize -1
>tapetype LTO-Ultrium1
>labelstr "^dev[0-9][0-9]*$"
>amrecover_do_fsf yes
>amrecover_check_label yes
>amrecover_changer "/dev/rmt/0cbn"
This should probably be the same as 'changerdev' above.
>holdingdisk hd1 { # ~2GB
> comment "main holding disk"
> directory "/space/amanda"
> use -200 Mb
> chunksize 1Gb
>}
>holdingdisk hd2 { # ~20GB
> comment "2nd holding disk"
> directory "/dgig2/amanda"
> use -500 Mb
> chunksize 1Gb
>}
If this is only about 20GB, then dumps bigger than that will go
straight to the tape drive. The ideal would be about 150% of a tape,
with a 30% reserve.
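Applied to these ~100GB tapes, that guideline means on the order of
150GB of holding space; for example (path and sizes hypothetical):

```
# amanda.conf - a holding disk big enough to spool any single DLE
holdingdisk hd3 {
	comment "large holding disk, ~150% of one tape"
	directory "/bigspool/amanda"   # a ~150GB filesystem
	use -1 Gb                      # use all but 1GB of it
	chunksize 1Gb
}
```

That way even the largest DLEs spool to disk first, and the taper can
stream them to tape at full speed instead of waiting on the dumper.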
>autoflush yes
>
>infofile "/home/amanda/etc/amanda/dev/curinfo"
>logdir "/home/amanda/etc/amanda/dev"
>indexdir "/home/amanda/etc/amanda/dev/index"
>tapelist "/home/amanda/etc/amanda/dev/tapelist"
>
>define tapetype LTO-Ultrium1 {
> comment "LTO Ultrium1"
> length 100864 mbytes
> filemark 0 kbytes
> speed 14300 kps
>}
>
>define dumptype global {
> comment "Global definitions"
> index yes
> maxdumps 3
>}
>
>define dumptype always-full {
> global
> comment "Full dump of this filesystem always"
> compress none
> priority high
> dumpcycle 0
> index yes
>}
>
>define dumptype root-tar {
> global
> program "GNUTAR"
> comment "root partitions dumped with tar"
> compress none
> index yes
> #exclude list "/var/amanda/exclude.gtar"
> priority low
>}
>
>define dumptype user-tar {
> root-tar
> comment "user partitions dumped with tar"
> priority medium
> index yes
>}
>
>define dumptype high-tar {
> root-tar
> comment "partitions dumped with tar"
> priority high
> index yes
>}
>
>define dumptype comp-root-tar {
> root-tar
> comment "Root partitions with compression"
> compress client fast
> index yes
>}
>
>define dumptype comp-user-tar {
> user-tar
> compress client fast
> index yes
>}
>
>define dumptype holding-disk {
> global
> comment "The master-host holding disk itself"
> holdingdisk no # do not use the holding disk
> priority medium
> index yes
>}
>
>define dumptype comp-user {
> global
> comment "Non-root partitions on reasonably fast machines"
> compress none
> priority medium
> index yes
>}
>
>define dumptype nocomp-user {
> comp-user
> comment "Non-root partitions on slow machines"
> compress none
> index yes
>}
>
>define dumptype nocomp-user-pri {
> comp-user
> comment "Non-root partitions on slow machines - high priority"
> priority high
> compress none
> index yes
>}
>
>define dumptype comp-root {
> global
> comment "Root partitions with compression"
> compress client fast
> priority low
> index yes
>}
>
>define dumptype nocomp-root {
> comp-root
> comment "Root partitions without compression"
> compress none
> index yes
>}
>
>define dumptype comp-high {
> global
> comment "very important partitions on fast machines"
> compress client best
> priority high
> index yes
>}
>
>define dumptype nocomp-high {
> comp-high
> comment "very important partitions on slow machines"
> compress none
> index yes
>}
>
>define dumptype nocomp-test {
> global
> comment "test dump without compression, no /etc/dumpdates recording"
> compress none
> record no
> priority medium
> index yes
>}
>
>define dumptype comp-test {
> nocomp-test
> comment "test dump with compression, no /etc/dumpdates recording"
> compress client fast
> index no
>}
>
>define interface local {
> comment "a local disk"
> use 10000 kbps
>}
>
>define interface hme0 {
> comment "100 Mbps ethernet"
> use 6000 kbps
>}
>
>---------8<---------- disklist ----------->8---------
>
>#
># disklist - amanda definitions for my setup
>#
># File format is:
>#
># hostname diskdev dumptype [spindle [interface]]
>#
>host1 / nocomp-root
>host1 /usr nocomp-user
>host1 /gig nocomp-user
>host1 /gigd0 nocomp-user
>host1 /gigd1 nocomp-user
>host2 / nocomp-root
>host2 /usr nocomp-user
>host2 /gigd0 nocomp-user
>host2 /gigd1 nocomp-user
>host2 /gigd2 nocomp-user
>host2 /gigd3 nocomp-user
>amsrv / nocomp-root -1 local
>amsrv /dgig nocomp-user -1 local
>amsrv /dgig2 holding-disk -1 local
>amsrv /space holding-disk -1 local
>amsrv /gigd0 nocomp-user
>amsrv /gigd1 nocomp-user
>amsrv /gigd2 nocomp-user
>amsrv /gigd3 nocomp-user
>amsrv /gigd4 nocomp-user
>amsrv /gigd5 nocomp-user
>amsrv /gigd6 nocomp-user
>amsrv /gigd7 nocomp-user
>amsrv /gigd8 nocomp-user
>amsrv /gigd9 nocomp-user
>amsrv /gigd10 nocomp-user
>amsrv /gigd11 nocomp-user
>amsrv /gigd12 nocomp-user
>amsrv /gigd13 nocomp-user
>amsrv /gigd14 nocomp-user
>amsrv /gigd15 nocomp-user
>amsrv /gigd16 nocomp-user
>host3 / nocomp-root
>host3 /d0 nocomp-user
>host3 /gigd0 nocomp-user
>host3 /gigd1 nocomp-user
>host4 /space nocomp-user
>host5 / nocomp-user
>host6 /gigd0 nocomp-user
>host6 /gigd1 nocomp-user
>host6 /gigd2 nocomp-user
>host7 / nocomp-root
>host7 /usr nocomp-user
>host7 /disc1 nocomp-user-pri
>host7 /disc2 nocomp-user-pri
>host7 /export/home nocomp-user
>host8 / nocomp-root
>host8 /dore nocomp-user
>host8 /space nocomp-user
>host8 /gigd0 nocomp-user
>host8 /gigd1 nocomp-user
>host8 /gigd2 nocomp-user
>host8 /gigd3 nocomp-user
>host8 /gigd4 nocomp-user
>host8 /gigd5 nocomp-user
>host8 /gigd6 nocomp-user
>host8 /gigd7 nocomp-user
>host8 /gigd8 nocomp-user
>host8 /gigd9 nocomp-user
>host8 /gigd10 nocomp-user
>host8 /gigd11 nocomp-user
>host8 /gigd12 nocomp-user
>host8 /gigd13 nocomp-user
>host8 /gigd14 nocomp-user
>host8 /gigd15 nocomp-user
>host8 /gigd16 nocomp-user
>host9 / nocomp-root
>host9 /sra nocomp-user-pri
>host9 /gigd0 nocomp-user
1: make sure those aliases exist in all host files.
2: Universally non-compressing wastes a bit of tape for those
filesystems that are mainly text and similar files. These can be
compressed by using a 'compress client best' setting in the dumptype,
often to less than 10% of their original size for the 'best' setting,
though 'best' is also painfully slow. 'fast' doesn't compress as well,
but is a lot faster. The trade-off, of course, is the cpu time to do
the compression, but if this is done on the client, then each client
can be doing its own compression at the same time as all the others.
It will also (theoretically) reduce the network load, due to the
reduced size of the data to be moved to the server.
I first started out doing compression on everything and reading the
emails I got from amanda. From those I could identify the directories
that were compressible, and those that were not, as evidenced by the
compression line in the email reporting some figure of 100% or above,
indicating that the data actually grew. Timewise, it's hardly worth
the effort if the data won't crunch to 3/4 of its original size, so I
also turned compression off for those DLEs that were consistently
showing 85% or above. That was just a lot of wheel spinning, IMO.
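A dumptype along those lines, for the text-heavy filesystems, could be
built on the 'global' dumptype from the config above (the name is
hypothetical):

```
define dumptype comp-user-best {
	global
	comment "compressible user partitions, client-side best compression"
	compress client best   # often <10% for text; costs client cpu time
	priority medium
	index yes
}
```

Keeping it as a separate dumptype lets you move individual disklist
entries over one at a time, watching the compression figures in the
report before committing the rest.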
Anyway, it sounds like you are beginning to 'get the hang of it' :)
>(client- and setup-names changed for anonymous viewing ;-)
>
>...looks weird, i know, but i still have to support nfs v2 clients,
>which means most of these partitions must be smaller than 8GB while
>used by nfs v2 clients.
>
> Peter
--
Cheers Peter, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz 512M
99.26% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attornies please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.