Re: dumps fail: data timeout
2002-08-28 13:28:49
On Wednesday 28 August 2002 10:49, Amy Tanner wrote:
>I'm getting data timeout errors on filesystems of just 2 of the
> machines our amanda server backs up. The
> /tmp/amanda/sendbackup*debug file does not report errors but also
> does not list a finish time. For example:
>
>sendbackup: debug 1 pid 27285 ruid 95 euid 95 start time Wed Aug
> 28 02:29:21 2002
>/usr/local/libexec/sendbackup: got input request: DUMP /u02 0
> 1970:1:1:0:0:0 OPTIONS |;bsd-auth; parsed request as: program
> `DUMP' disk `/u02' lev 0 since 1970:1:1:0:0:0 opt `|;bsd-auth;'
> waiting for connect on 2718, then 2719
> got all connections
>sendbackup: spawning "/sbin/dump" in pipeline
>sendbackup: argument list: "dump" "0usf" "1048576" "-"
> "/dev/ida/c0d3p1"
>
>There are several dump and sendbackup processes still running.
> I've tried killing all the dump & sendbackup processes, and
> waiting for the nightly dump, but the next day I still see
> failures on one or more filesystems on the same 2 machines.
> But it's not always the same filesystems where it fails.
It rather sounds like you need to allow more time for the process to
run. That's configurable with a couple of timeout variables in your
amanda.conf file, along with some explanatory text describing them.
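As a sketch, the relevant server-side settings are dtimeout, etimeout,
and ctimeout (see the amanda.conf man page); the values below are just
examples, not recommendations for your site:

```
# amanda.conf -- server-side timeouts, in seconds
ctimeout 30      # how long to wait for a client to accept a connection
etimeout 300     # per-filesystem time allowed for the size estimate
dtimeout 1800    # data timeout -- raise this if dumps die mid-transfer
```

Since your reports say "data timeout", dtimeout is the one to try
raising first.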
>I'm using dump with hardware compression (no software compression)
> on all machines & file systems.
The use of hardware compression as opposed to software compression
can also be a source of 'gotchas'. When hardware compression is
used, amanda doesn't have a very good idea how much data a tape can
hold, so you have to set the tape capacity conservatively, which
on big tapes can result in gigabytes of underutilization.
With the hardware compression turned off, a run of the tapetype
program can determine the tape's capacity very accurately. Amanda
then counts the bytes *after* the software compressors have had
their way with the data. We highly recommend turning off the
hardware compression, and then running the compressor on whichever
machine has the horsepower to do it best, server or client.
Obviously, on a large network, even a fast server will need to
offload that duty to the clients as much as possible, even if they
are slower, because 20 clients all doing their thing simultaneously
are still going to finish faster than one server trying to do 20
clients' worth of compression serially.
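The tapetype program prints a definition you paste into amanda.conf.
A sketch of what that looks like (the drive name and figures here are
made up; use whatever your own run reports, measured with hardware
compression switched off):

```
# amanda.conf -- definition as printed by the tapetype program
define tapetype MY-DLT {
    comment "measured by tapetype, hw compression off"
    length 19531 mbytes
    filemark 4 kbytes
    speed 1500 kps
}
tapetype MY-DLT
```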
I use tar, and have 37 entries in my disklist. 7 of those entries
regularly compress to less than 20% of their source size. I run
compression for about half of them, as I've turned it off for those
partitions containing already-compressed stuff, which will expand
if you attempt to re-compress it. This effect is also true of the
hardware compressors. The emails you get from amanda will tell you,
by the compression ratios it shows, which ones need to be run raw.
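In practice that means a dumptype per compression choice, picked
per disklist entry. A sketch (the dumptype and host names are
hypothetical):

```
# amanda.conf -- one dumptype with compression, one without
define dumptype comp-client {
    program "GNUTAR"
    compress client fast   # gzip on the client, spreads the load
}
define dumptype nocomp {
    program "GNUTAR"
    compress none          # for partitions of already-compressed files
}

# then in the disklist:
#   somehost  /home     comp-client
#   somehost  /archive  nocomp
```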
--
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz 512M
99.13% setiathome rank, not too shabby for a WV hillbilly