Amanda-Users

Re: backup just on holding disks makes many level 0

Subject: Re: backup just on holding disks makes many level 0
From: Thomas Widhalm <widhalmt AT unix.sbg.ac DOT at>
To: Paul Bijnens <paul.bijnens AT xplanation DOT com>
Date: Thu, 16 Mar 2006 10:43:30 +0100
On Thu, 2006-03-16 at 10:32 +0100, Paul Bijnens wrote:
> On 2006-03-16 09:12, Thomas Widhalm wrote:
> >> And, actually, we did not yet see how many times this DLE
> >> was dumped with level 0.
> >>
> >> Can you post the output of "amadmin config balance" and
> >> "amoverview config" (or mail it to me if it contains
> >> too much private information).
> >>
> >>
> > 
> > [root@amanda root]# amadmin IS balance
> > 
> >  due-date  #fs    orig KB     out KB   balance
> > ----------------------------------------------
> >  3/16 Thu    0          0          0      ---
> >  3/17 Fri    0          0          0      ---
> >  3/18 Sat    0          0          0      ---
> >  3/19 Sun    1    2960985    2960984     -8.7%
> >  3/20 Mon    2   18406150   14296059   +341.0%
> >  3/21 Tue    6    4925020    2194810    -32.3%
> > ----------------------------------------------
> > TOTAL        9   26292155   19451853   3241975
> >   (estimated 6 runs per dumpcycle)
> 
> Amanda expects about 19 GByte of level 0 dumps, and, from
> experience in the last cycle, expects to run 6 times during
> a dumpcycle.
> Didn't you specify a dumpcycle of 8, together with
> runspercycle 0, promising to run 8 times instead of 6?
> Better make sure the crontab entry is in harmony with the
> runspercycle as specified in amanda.conf.
> Or is this IS config one with runspercycle 6?

I switched to a dumpcycle of 6 yesterday, because otherwise it would
have overfilled my holdingdisk.
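For reference, here is a sketch of keeping amanda.conf and cron in step, as Paul suggests (the paths, times, and config name are examples, not taken from this setup):

```shell
# amanda.conf (excerpt) -- dump every DLE at level 0 once per 6 runs:
#   dumpcycle 6 days
#   runspercycle 6      # or 0, which means "runspercycle = dumpcycle"
#
# The crontab must then actually run amdump 6 times per cycle,
# e.g. once a night, Monday through Saturday:
#   0 1 * * 1-6   /usr/sbin/amdump IS
```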

> 
>  From the last cycle, Amanda now looks at the due-dates (the
> date when the next level 0 of a filesystem is scheduled)
> and calculates how much data that generates.  This effectively
> boils down to the real data of the last dumpcycle, shifted
> into the future.
> So from the above, Amanda does not need to dump anything at
> level 0 for the next three days, but she looks ahead and sees
> that on 3/20 she expects to dump 14 GByte.  So during the
> next run she will promote some filesystems to that run, in the
> hope of lowering the amount of work on 3/20.  She tries to
> dump 19 GByte / 6 each time.
> 
> But I guess that one of those filesystems due on 3/20 is a
> very large one.  And so Amanda is always out of balance.
> She tries to dump 1/6th each day, promoting everything,
> except the large one. But the work is for nothing because that
> one filesystem is just way bigger than 1/6 of the work.
> 
> Does that make sense in your config?

Oh yes, it really does. In every config there is at least one filesystem
much bigger than all the others. If that is a problem for Amanda, that
explains my issues.
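Putting rough numbers on it, using the figures from the "amadmin IS balance" output above (a back-of-the-envelope sketch, not Amanda's real scheduler; attributing the 14 GByte on 3/20 to a single big DLE is an assumption):

```python
# Amanda aims for roughly total/runspercycle KByte of level-0 dumps per
# run, but a single DLE cannot be split across runs.
total_out_kb = 19_451_853     # TOTAL "out KB" from the balance report
runspercycle = 6
big_dle_kb = 14_296_059       # the 3/20 entry, assumed to be one DLE

target_per_run_kb = total_out_kb / runspercycle
print(f"per-run target: {target_per_run_kb:,.0f} KB")
print(f"big DLE is {big_dle_kb / target_per_run_kb:.1f}x the target")
```

So however much promoting Amanda does, every run that contains this DLE is more than four times the per-run target, and the schedule can never balance.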

> 
> 
> > 
> > [root@amanda root]# amoverview IS
> >          date                 03 03 03 03 03 03 03 03 03
> > host     disk                 08 09 10 11 12 13 14 15 16
> > 
> > amanda.e /                     0  0  0  0  0  0  1  0  1
> > amanda.e /boot                    0  0  0  0  0  1  0  1
> > commodo. /                     1  2  2  2  2  2  2  1  1
> > linglab. /                        1  1  1  1  1  0  1  1
> > linglab. /home                    1  1  1  1  1  1  1  1
> > psyserv0 /                        0  0  0  0  0  1  0  1
> > psyserv0 /boot                    0  0  0  0  0  1  0  1
> > psyserv0 /usr                     0  0  0  0  0  1  0  1
> > psyserv0 /var                     0  0  0  0  0  1  0  1
> 
> 
> First, the disklist details that were posted earlier were for
> host springfield.edvz.sbg.ac.at, but that host is not in the list above.
> So I'm still not 100% sure that host psyserv0 has the correct
> parameters.  But let's assume it is all correct.
> 
> How large is the level 0 dump of the linglab. root filesystem?
> Is that the enormous chunk that Amanda has difficulty
> balancing?

This seems correct to me. linglab is rather big.

> 
> 
> > Please remember that IS is just one of, for now, 4 configs which are run
> > daily.
> 
> Are those 4 configs on 4 servers?  Any reason not to consolidate them
> into fewer (19Gbyte is not a large config -- mine needs to dump
> 160 Gbyte compressed over the complete dumpcycle, and I do use vtapes
> for this too.)

4 configs on one server, each backing up different hosts.

> 
> 
> > It seems that tweaking "reserve 30" and "maxpromoteday 2" did it. Today
> > there was just one full backup, which is how it should be.
> 
> With "reserve 30", Amanda will use only 70% of the free holdingdisk
> for level 0 dumps, keeping the rest for incrementals.
> 
> Note that that 30% is recalculated on each run, based on the currently
> available space in the holdingdisk: if you start one day with 200 GByte
> of free space, then 60 GByte is reserved for incrementals, allowing for
> up to 140 GByte of level 0 dumps.  Assuming that Amanda did indeed fill
> those 140 GByte with level 0 dumps (plus 20 GByte of incrementals), the
> next day there is only 40 GByte free in the holdingdisk.  So Amanda
> reserves 12 GByte for incrementals, and may again use 28 GByte for
> full dumps.  And so on.

Oh. This is really good to know.
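To make sure I understood the shrinking effect, here is your 200-GByte example worked through (a sketch only; it assumes "reserve 30" holds back 30% of the *currently free* space each run, and worst-case it assumes Amanda fills the full level-0 allowance plus 20 GByte of incrementals every run):

```python
free_gb = 200.0
incrementals_gb = 20.0
history = []                      # (free_before, reserved, level0_room)

for run in range(1, 4):
    reserved = 0.30 * free_gb         # held back for incrementals
    level0_room = free_gb - reserved  # available for level 0 dumps
    history.append((free_gb, reserved, level0_room))
    print(f"run {run}: free={free_gb:.0f} reserved={reserved:.0f} "
          f"room for level 0={level0_room:.0f}")
    # worst case: the allowance is used completely, nothing is flushed
    free_gb = max(free_gb - level0_room - incrementals_gb, 0.0)
```

Which also illustrates your point that this is no real solution: if nothing is flushed to tape, the holdingdisk is full within a couple of runs.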

> 
> So I still believe this is not a real solution.
> 
> The solution is probably to divide the enormous DLE into smaller,
> manageable pieces.  And use real vtapes instead of dumping to 
> holdingdisk with a misconfigured tapedevice.

Dividing seems good to me, too. I am trying to get a second server for
backups, which might still take some time. (Our main servers get backed
up with Data Protector; Amanda is used for workstations and some
"low-priority" servers. You see, it's not my call to decide on our main
backup strategy. ;-) )

The new server will be set up with Tao 4 (= RHEL4) so I can use a newer
Amanda version with the newer way of using vtapes. When it is running, I
will try to switch the current server to vtapes, too. For now, it runs
with these settings well enough to keep backups up to date until the new
server arrives. If maxpromoteday works out fine, I will set "reserve"
back to 0.
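For my own notes, the vtape layout on the new server should be little more than one directory per slot; a minimal sketch (paths, slot count, and the amanda.conf/amlabel lines in the comments are assumptions to be checked against the Amanda version I end up with):

```shell
#!/bin/sh
# Create one directory per virtual tape slot.
VTAPE_ROOT=${VTAPE_ROOT:-./vtapes-IS}   # example path
SLOTS=${SLOTS:-6}                       # example slot count

i=1
while [ "$i" -le "$SLOTS" ]; do
    mkdir -p "$VTAPE_ROOT/slot$i"
    i=$((i + 1))
done

# amanda.conf would then point at the slots, e.g. (newer-style syntax,
# to be verified against the installed version):
#   tpchanger "chg-disk:/path/to/vtapes-IS"
# and each vtape gets labeled once, e.g.:
#   amlabel IS IS-01 slot 1
ls "$VTAPE_ROOT"
```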

Thanks for your help. I will keep you posted on how it works out over
the next days.

Regards,
Thomas

-- 

*****************************************************************
* Thomas Widhalm                             Unix Administrator *
* University of Salzburg                       ITServices (ITS) *
* Systems Management                               Unix Systems *
* Hellbrunnerstr. 34                     5020 Salzburg, Austria *
* widhalmt AT unix.sbg.ac DOT at                     +43/662/8044-6774 *
* gpg: 6265BAE6                                                 *
* http://www.sbg.ac.at/zid/organisation/mitarbeiter/widhalm.htm *
*****************************************************************


