Subject: Re: chg-manual, span
From: Gene Heskett <gene.heskett AT verizon DOT net>
To: Thorsten Jungeblut <tj AT hni.uni-paderborn DOT de>, amanda-users AT amanda DOT org
Date: Thu, 15 May 2003 23:46:54 -0400
On Thu May 15 2003 18:46, Thorsten Jungeblut wrote:
>Hi!
>
>I'm having problems spanning one filesystem over multiple tapes.
>For testing I've got 3 GB of data to back up to 2 GB tapes.
>
>My config is:
>
>org "std"               # your organization name for reports
>dumpcycle 7 days        # the number of days in the normal dump
> cycle tapecycle 10 tapes      # the number of tapes in rotation
> bumpsize 20 MB          # minimum savings (threshold) to bump
> level 1 -> 2 bumpdays     1          # minimum days at each level
>bumpmult     4          # threshold = bumpsize *
> (level-1)**bumpmult runtapes     5         # explained in
> WHATS.NEW

You only have 10 tapes, yet the dumpcycle is 7 days and runtapes 
is 5?

That implies the use of at least 35 tapes (5 tapes per run times 7 
runs in a dumpcycle), and preferably about 70 in the tapecycle 
pool.  One really should have 2 full generations of backups on 
hand.  For some reason the most recent might be fubar, and it's 
nice to be able to fall back to another cycle.  I use DDS2's also, 
and get them on eBay at quite reasonable prices in the vicinity of 
$20 a box of 10.  Plus the usual exorbitant shipping of course.
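
Something along these lines would be a more comfortable starting 
point (the numbers are only an example, adjust tapecycle to what 
you can actually afford):

  runtapes 5            # tapes amanda may use in one run
  dumpcycle 7 days      # a full of everything at least once a week
  tapecycle 70 tapes    # about 2 generations: 5 tapes x 7 runs x 2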

>tpchanger "chg-manual" # the tape-changer glue script, see
> TAPE.CHANGERS #changerdev "/dev/null"
>changerfile "/etc/amanda/std/changer"
>tapedev "/dev/nst0"     # Linux @ tuck, important: norewinding
>
>holdingdisk hd1 {
>        directory "/disk1/amanda"
>        comment "main holding disk"
>        use 10000 Mb
>        chunksize 1 GB
>        }
How much _could_ you give it? 10 gigs may be a bit tight.
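
If /disk1 has the room, I'd give it more than that, something like 
this (the number is only an example, use whatever the disk can 
spare):

  holdingdisk hd1 {
          directory "/disk1/amanda"
          comment "main holding disk"
          use 20000 Mb          # whatever /disk1 can actually spare
          chunksize 1 GB
          }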

>[...]
>
>define dumptype standard {
>    comment "standard"
>    no-compress
>    index yes
>    priority medium
>}
>
>disklist:
>
>localhost       /dev/hda5       standard

Ouch.  For numerous reasons one should not use localhost, but 
instead the FQDN of the machine.  Localhost will bite you, a bit 
higher than the ankles.  Probably at recovery time, which is a bad 
time to find that out, according to people who've run into it.
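
So in the disklist, something along these lines instead (the 
hostname here is only a placeholder, use your machine's real FQDN):

  tuck.example.com      /dev/hda5       standard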

>It is my first try with amanda, so please correct me, too, if I
> make some trivial mistakes. :)
>
>To start complete new run, I do an
>
>amrmtape std STD00 (STD01, ... tapes I used in tests before)
>amcleanup std
>rm -r /var/lib/amanda/std/*
>rm -r /disk1/amanda/*
>
>(is this the correct way to restart? Did I miss something?)
>
>amdump std
>
>(which should now do an level 0 dump, right?)
>
>amstatus std
>
>says that amanda is dumping the data (everything) to the holding
> disk and is starting to write it to the tape.
>
>then it asks to insert the 2nd tape
>then it asks to insert the 3rd tape (? 3 GB should easily fit
> on two 2 GB tapes?)

It would, IF amanda could span a *single* disklist entry across 
more than one tape.  It cannot.  You didn't mention which dumper 
you were using; dump is limited to whole filesystems, but tar lets 
you split the disklist entries up into subdirectories that will 
fit.  Just make sure the tar is at least 1.13-19; 1.13-25 is 
better.  Plain old 1.13 is broken at recovery time.  Generally 
speaking, most of this group recommends the use of tar, not dump 
or some cousin of it.
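
As a rough sketch, if /dev/hda5 is mounted on /home, a tar-based 
dumptype plus split-up disklist entries could look like this (the 
hostname, mount point and subdirectory names are only 
placeholders):

  define dumptype user-tar {
      comment "subdirectories dumped with GNU tar"
      program "GNUTAR"
      index yes
      priority medium
  }

  tuck.example.com      /home/src        user-tar
  tuck.example.com      /home/mail       user-tar
  tuck.example.com      /home/projects   user-tar

Each of those entries then has to fit on a single tape by itself.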

>amdump exits after 0 kB on the 3rd tape, the log shows an error
> about too many tape retries.
>
>->amflush std
>->requires a 4th tape.
>
>the tape is ejected, changer.debug complains about failing to
> write to /dev/tty (I think that should be the request for the
> next tape, which failed because amflush puts itself into the
> background)
>
>Am I doing anything totally wrong?
>
>Tnx for your help!!

-- 
Cheers, Gene
AMD K6-III@500MHz 320M
Athlon1600XP@1400MHz  512M
99.26% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.

