Amanda-Users

Re: "estimate failed" question

2006-10-23 12:05:28
Subject: Re: "estimate failed" question
From: Jon LaBadie <jon AT jgcomp DOT com>
To: amanda-users AT amanda DOT org
Date: Mon, 23 Oct 2006 11:58:41 -0400
On Mon, Oct 23, 2006 at 05:44:15PM +0200, Paul Bijnens wrote:
> On 2006-10-23 17:08, McGraw, Robert P. wrote:
> > I am getting the following messages.
> > 
> >   planner: disk coriolis:/local, estimate of level 1 failed.
> [...]
> > 
> > My report shows:
> > 
> >   coriolis   /local      0     0     0    --   0:01  11.0   0:04   16.7
> [...]
> > 
> > My question: since the estimate failed for level 1, my report shows that it
> > did a level 0. Is this in fact what happened? Since it could not do a level
> > 1, it just went ahead and did a level 0? Just want to be sure I am reading
> > the report correctly.
> 
> The planner of Amanda will always estimate the size of level 0,
> level N (the last level) and level N+1 (if "bumpdays" permits).
> 
> Using all those numbers, the planner creates a plan for the backup.
> Planner starts with all the level 0's that are due, and the level N
> of all the rest (or N+1 if the "bump.*" parameters got triggered for
> this DLE).
> If the resulting total amount is too large for the output media,
> then Amanda will postpone some full dumps, and schedule a level N
> instead for some of those.
> Apparently Amanda did not feel the need for such a reordering.
> So the missing estimates for level 1 did not interfere with the
> planner algorithm.
> 
> To be complete, when there is enough space and the total amount
> of scheduled level 0 dumps is less than the "balanced" size (= total
> size of all level 0 dumps divided by runspercycle), Amanda will
> schedule some full dumps ahead of their due date.
> 
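
Paul's description can be sketched in miniature. The Python below is a
hypothetical simplification, not Amanda's actual planner code: the `plan`
function, its parameters, and the DLE size fields are all invented for
illustration. It shows the three steps in order: start with level 0 for due
DLEs and an incremental for the rest, postpone fulls that overflow the
media, then promote fulls early while the full total stays under the
"balanced" size.

```python
def plan(dles, tape_size, runspercycle):
    """Toy model of the planner. dles: list of dicts with keys
    'name', 'due' (bool), 'full_size', 'incr_size'. Returns
    {name: level}, where 0 is a full dump and 1 stands in for
    "level N" (the next incremental)."""
    # Step 1: fulls for DLEs that are due, incrementals for the rest.
    schedule = {d['name']: (0 if d['due'] else 1) for d in dles}

    def total():
        return sum(d['full_size'] if schedule[d['name']] == 0
                   else d['incr_size'] for d in dles)

    # Step 2: too large for the output media -- postpone some fulls,
    # largest first, scheduling an incremental instead.
    for d in sorted(dles, key=lambda d: d['full_size'], reverse=True):
        if total() <= tape_size:
            break
        if schedule[d['name']] == 0:
            schedule[d['name']] = 1

    # Step 3: if the scheduled fulls are under the balanced size and
    # there is room on the media, promote some fulls ahead of schedule.
    balanced = sum(d['full_size'] for d in dles) / runspercycle
    full_total = sum(d['full_size'] for d in dles
                     if schedule[d['name']] == 0)
    for d in sorted(dles, key=lambda d: d['full_size']):
        if schedule[d['name']] != 1:
            continue
        grown = total() - d['incr_size'] + d['full_size']
        if full_total + d['full_size'] <= balanced and grown <= tape_size:
            schedule[d['name']] = 0
            full_total += d['full_size']
    return schedule
```

In Robert's case the analogue of step 2 never fired, so the missing level 1
estimates never mattered: the fulls that were due fit on the tape as-is.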

Robert,
to support Paul's comments, did the report indicate these
DLEs were "promoted"?

-- 
Jon H. LaBadie                  jon AT jgcomp DOT com
 JG Computing
 4455 Province Line Road        (609) 252-0159
 Princeton, NJ  08540-4322      (609) 683-7220 (fax)
