Amanda-Users

Subject: Re: Amanda 2.4, not using work area
From: Brian Cuttler <brian AT wadsworth DOT org>
To: Frank Smith <fsmith AT hoovers DOT com>
Date: Fri, 8 Sep 2006 15:02:44 -0400
Frank,

Good suggestion, it might be that something as old as 2.4 reserved
a lot of space. I went over this issue recently on another (more
recent) system and had thought that "reserve" only came into play
in degraded mode - when the tape drive was unavailable. However, it
might be that in 2.4 the rules were somewhat different.

I have set reserve to 15 and will see how amanda progresses in
the next run.
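
For reference, the relevant lines of amanda.conf now look roughly
like the sketch below; the holdingdisk stanza is paraphrased, and
the directory path is a placeholder rather than the real one:

    reserve 15                      # percent of holding disk reserved for incrementals

    holdingdisk hd1 {
        comment "work area"
        directory "/dumps/amanda"   # placeholder path
        use 70 Gb                   # the 70 Gig work-area partition
        # no chunksize set yet - see the chunksize discussion quoted below
    }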

Will post results either way.

                                                thank you,

                                                Brian

On Fri, Sep 08, 2006 at 01:46:34PM -0500, Frank Smith wrote:
> Brian Cuttler wrote:
> > Hello amanda users,
> > 
> > I'm running an old version of amanda 2.4 on a Solaris 9 system
> > with a Solaris 8 client.
> > 
> > It's come to my attention that the two larger client partitions
> > are not being moved through the work area but are being written
> > directly to tape.
> > 
> > The work area is a 70 Gig partition; the client DLEs are on 35 Gig
> > partitions. I'd expect them to use the work area even if they did so
> > sequentially; the partitions, however, are only about 70% occupied,
> > approx 24 Gig each, so ideally I'd have liked to see some
> > parallelism.
> > 
> > From the daily reports I see that the smaller client partitions
> > on both the Solaris 8 and 9 machines (the amanda server does have
> > itself as a client) do utilize the work area.
> > 
> > I do not know what is preventing the work area from being used.
> > I would add more work area if I thought it would help, but I don't
> > see anything screaming "work area capacity issue".
> 
> If the direct-to-tape DLEs are level 0s, look at the 'reserve' option.
> It tells amanda what percentage of your holdingdisk to save for use by
> incrementals, so in case of tape problems you can run longer because
> you don't fill it up with fulls. I don't remember what it defaults to
> if not specified, but I think it is most of the space.
> 
> > 
> > Here is a question: I assume chunksize appeared around the same
> > time as (if not actually with) the ability to split a single DLE
> > across multiple work areas. I see it in the docs back to '98
> > or so, but I'm not sure when it first appeared. Is there a list
> > of which features were added in which version, other than the
> > changelog installation file?
> 
> Chunksize was a workaround for writing dumps larger than the system's
> max file size (which was 2GB on many machines at the time) to the
> holding disk. I think support for multiple holding disks was added later.
> 
> Frank
> 
> > 
> > Anyway, it doesn't look like a work area capacity issue. What, other
> > than adding chunksize to my amanda.conf and perhaps adding additional
> > work area, can I do to investigate this issue?
> > 
> > There does not seem to be any output in the /tmp/amanda/* files
> > showing which DLEs will use the work area and which will not; where
> > else can I look for an explanation/solution to this issue?
> > 
> >                                             thank you,
> > 
> >                                             Brian
> > ---
> >    Brian R Cuttler                 brian.cuttler AT wadsworth DOT org
> >    Computer Systems Support        (v) 518 486-1697
> >    Wadsworth Center                (f) 518 473-6384
> >    NYS Department of Health        Help Desk 518 473-0773
> > 
> 
> 
> -- 
> Frank Smith                                      fsmith AT hoovers DOT com
> Sr. Systems Administrator                       Voice: 512-374-4673
> Hoover's Online                                   Fax: 512-374-4501
---
   Brian R Cuttler                 brian.cuttler AT wadsworth DOT org
   Computer Systems Support        (v) 518 486-1697
   Wadsworth Center                (f) 518 473-6384
   NYS Department of Health        Help Desk 518 473-0773

