Re: large filesystem timeouts - is there a better way
2006-10-19 18:45:10
Hello,
You can add this line to the dumptype definition that you are using;
it will help speed up the estimate:
estimate calcsize
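For example, a dumptype using it might look like the sketch below (the dumptype name and the other parameters are just placeholders for whatever your existing definition contains; only the `estimate calcsize` line is the change being suggested):

```
define dumptype big-fs-tar {
    global
    program "GNUTAR"
    # Use the lightweight calcsize scanner instead of a full
    # dry-run of the dump program when computing estimates.
    estimate calcsize
}
```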
Pavel
Hi,
We have a couple of large filesystems that we would like to back up
using Amanda. We are currently using it to back up several hundred
other smaller systems very successfully, but the two systems we
have struggled to get working are the two largest and most important.
Each filesystem is ~500GB.
The error is a timeout:
ev 0 FAILED [Estimate timeout from
I understand what it means, and we have increased the timeout
(etimeout) several times; we are now up to 3 hours and still getting a
timeout. My question is the following: is there a way, using amcheck
or some other means, to find out ahead of time what this value should
be for a particular client filesystem? Or should I just set it really
large, like 24 hours, and see what comes back as the estimate?
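One rough way to gauge this ahead of time, assuming the DLE uses GNUTAR: time a metadata-only walk of the tree on the client, since both du and a tar-based estimate have to stat every file. The TARGET path below is a placeholder; point it at the directory in your disklist entry.

```shell
#!/bin/sh
# Time a full metadata walk of the tree as a rough lower bound for
# how long an Amanda estimate pass over the same tree will take.
TARGET=/tmp   # placeholder: substitute the filesystem from your DLE
time du -sk "$TARGET"
```

This only approximates the estimate phase (it ignores compression sampling and per-level comparisons), but if the walk alone takes hours, etimeout has to be at least that large.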
The next question, then, is: if it really does take many hours just to
get an estimate from these two systems, is there a better way to speed
up the estimate for these large filesystems? Is anybody else using
Amanda to back up directory trees as large as 500GB?
The Amanda server is 2.4.5p1, and the two clients are 2.4.3 and 2.4.4p1.
Thanks in advance
Steve