Amanda-Users

Subject: Re: dumpcycle versus noinc
From: Peter Kunst <pkunst AT imagnos DOT com>
To: Steve Wray <stevew-lists AT catalyst.net DOT nz>
Date: Thu, 17 Mar 2005 02:28:55 +0100
Steve Wray wrote:
Peter Kunst wrote:

Hi Steve,

Steve Wray wrote:

Gaby vanhegan wrote:

Hello again!

Whilst setting up a full dump configuration to do monthly full dumps to tape, I'm torn between either:

    strategy "noinc"

or

    dumpcycle 0

To do a full dump. What's the difference here? Does it matter which one I use?
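
For reference, a minimal sketch of what the two variants could look like as amanda.conf dumptypes (the dumptype names and the "global" parent dumptype are just assumptions for illustration, not something from this thread):

    define dumptype monthly-noinc {
        global                  # assumed base dumptype defined elsewhere in amanda.conf
        program "GNUTAR"
        comment "monthly fulls; never fall back to an incremental"
        strategy noinc
    }

    define dumptype monthly-cycle0 {
        global
        program "GNUTAR"
        comment "full dump on every run via a zero-length dumpcycle"
        dumpcycle 0
    }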

Here's my reading of what I've seen from experimenting with these:

Suppose that a filesystem has grown too large to fit on the tape.

noinc will simply not dump that filesystem at all on that run.

dumpcycle 0 will fall back to incremental for that filesystem.

So when will amanda do a level 0 for that DLE, if that is really what
dumpcycle 0 does? In normal cases, amanda would simply say
"dumps too large" or something like that.

This is where amanda starts 'optimising' which tape it wants next based on when it last did a full dump and when the incrementals happened.

If I understand it right, it would try to schedule the zero-level for the next run. If again it couldn't fit, it would fall back some more.

Well, as far as I understand it, amanda will not dump a DLE at all if a level 0
doesn't fit on a single tape, as long as tape spanning isn't available.
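
For what it's worth, in amanda versions that do have tape spanning, the relevant dumptype options look roughly like this (a sketch from memory of the newer splitting support; the options below are not available in the 2.4.x versions we're talking about):

    define dumptype huge-tar {
        global                      # assumed base dumptype
        program "GNUTAR"
        tape_splitsize 5 Gb         # write the dump in ~5 GB chunks so it can span tapes
        split_diskbuffer "/var/tmp" # scratch directory used while assembling each chunk
        fallback_splitsize 64 Mb    # in-memory chunk size if the disk buffer can't be used
    }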

I was just mentioning this because having a backup completely *fail* when using noinc can be very unfortunate
:-/

Which amanda version are we talking about?


It's Debian sarge, and dpkg -s tells me:
2.4.4p3-2

...just using 2.4.4 here in "production" on Solaris.

If your amanda runs "completely fail", try adding DLEs one by one, one per day.
Amanda will try to spread the level 0s over its dumpcycle. Even then, it will
not work if a single DLE doesn't fit on a single tape. Try splitting larger
partitions into tar-based DLEs that each fit onto a single tape, e.g. along
these lines:
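
Something like this in the disklist, for example (the hostname, paths and the "user-tar" dumptype are made up; with a gnutar-based dumptype a subdirectory works fine as the disk name):

    # split one oversized /data filesystem into several gnutar DLEs
    bigclient.example.com  /data/projects  user-tar
    bigclient.example.com  /data/home      user-tar
    bigclient.example.com  /data/archive   user-tar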

 Cheers, Peter
