Estimate timeout from <FQDN>
2003-08-07 19:11:29
I have a couple of rather remote hosts whose root partitions dump and
tar just fine. Although I'm confident that the /usr partitions would
dump successfully if I freed enough space on the holding disk, a
generous third of each partition is cached data, which I have no
interest in backing up.
So I went with user-tar, and the planner wanders into a segmentation
violation: the timeout fires before the estimate is complete, and the
error handler depends on the estimate data structure to accomplish its
mission.
The obvious workaround is to increase etimeout, but what's taking so
long? It looks like a dry run of gtar is the source of the estimate.
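As far as I can tell, the GNUTAR estimate is roughly equivalent to the
dry run below: a full tar of the filesystem written to /dev/null, with
only the byte total kept. Timing it by hand shows how big etimeout needs
to be (the exact flags vary by Amanda version; --one-file-system and the
listed-incremental options are what the real planner also passes):

```shell
#!/bin/sh
# Roughly what Amanda's sendsize does for a GNU tar estimate: walk the
# whole filesystem, write the archive to /dev/null, and report the
# total bytes that would have been written.
time tar --create --file /dev/null --totals \
    --one-file-system --directory /usr .
```

On a partition that is a third cache files, tar has to stat and read
every one of them, which is why the estimate can blow past etimeout even
when the eventual dump would be modest.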
Config files:
==> disklist <==
trogdor twed0s1a nocomp-root -1 local
trogdor twed0s1e nocomp-user -1 local
domicile.mofo.com twed0s1a nocomp-root local
domicile.mofo.com twed0s1e nocomp-user local
strongbad.mofo.com / nocomp-root local
strongbad.mofo.com /usr nocomp-user local
www1.sf.ca.uberconnect.com / nocomp-root -1 fract1
www1.sf.ca.uberconnect.com /usr nocomp-user -1 fract1
fatmail.sf.ca.uberconnect.com / nocomp-root -1 fract1
fatmail.sf.ca.uberconnect.com /usr user-tar -1 fract1
fatmail.ny.ny.uberconnect.com / nocomp-root -1 fract1
fatmail.ny.ny.uberconnect.com /usr user-tar -1 fract1
==> amanda.conf <==
dumpuser "operator" # the user to run dumps under
inparallel 16 # maximum dumpers that will run in parallel (max 63)
dumporder "sssS" # specify the priority order of each dumper
netusage 600 Kbps # maximum net bandwidth for Amanda, in KB per sec
dumpcycle 4 weeks # the number of days in the normal dump cycle
runspercycle 20 # the number of amdump runs in dumpcycle days
tapecycle 25 tapes # the number of tapes in rotation
bumpsize 20 Mb # minimum savings (threshold) to bump level 1 -> 2
bumpdays 1 # minimum days at each level
bumpmult 4 # threshold = bumpsize * bumpmult^(level-1)
etimeout 6000 # number of seconds per filesystem for estimates.
dtimeout 1800 # number of idle seconds before a dump is aborted.
ctimeout 30 # maximum number of seconds that amcheck waits
tapebufs 20
runtapes 1 # number of tapes to be used in a single run of amdump
tapedev "/dev/nosuchdevice" # the no-rewind tape device to be used
rawtapedev "/dev/nosuchdevice" # the raw device to be used (ftape only)
changerdev "/dev/nosuchdevice"
maxdumpsize -1 # Maximum number of bytes the planner will schedule
tapetype DLT # what kind of tape it is (see tapetypes below)
labelstr "^normal[0-9][0-9]*$" # label constraint regex: all tapes must match
amrecover_do_fsf yes # amrecover will call amrestore with the
amrecover_check_label yes # amrecover will call amrestore with the
amrecover_changer "/dev/nosuchdevice" # amrecover will use the changer if you restore
holdingdisk hd1 {
    comment "main holding disk"
    directory "/usr/dumps/amanda" # where the holding disk is
    use -2048 Mb                  # how much space can we use on it
                                  # a non-positive value means:
                                  # use all space but that value
    chunksize 2047 Mb             # size of chunk if you want big dump to be
                                  # dumped on multiple files on holding disks
                                  # N Kb/Mb/Gb split images in chunks of size N
                                  # The maximum value should be
                                  # (MAX_FILE_SIZE - 1Mb)
                                  # 0 same as INT_MAX bytes
}
reserve 30 # percent
autoflush no #
infofile "/usr/local/var/amanda/normal/curinfo" # database DIRECTORY
logdir "/usr/local/var/amanda/normal" # log directory
indexdir "/usr/local/var/amanda/normal/index" # index directory
tapelist "/usr/local/etc/amanda/normal/tapelist" # list of used tapes
define tapetype DLT {
    comment "DLT tape drives"
    length 20000 mbytes  # 20 Gig tapes
    filemark 2000 kbytes # I don't know what this means
    speed 1536 kbytes    # 1.5 Mb/s
}
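For what it's worth, since the whole point of user-tar here was to skip
the cached data, an exclude list on the dumptype might cut both the dump
and the estimate, because gtar would never walk those directories. A
hypothetical sketch (the dumptype name and list path are made up; the
file holds one gtar pattern per line):

```
define dumptype user-tar-nocache {
    user-tar
    exclude list "/usr/local/etc/amanda/exclude.usr"
}
```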