Subject: Re: using amanda work
From: Jon LaBadie <jon AT jgcomp DOT com>
To: amanda-users AT amanda DOT org
Date: Tue, 27 Jun 2006 10:36:17 -0400
On Tue, Jun 27, 2006 at 10:09:05AM -0400, Brian Cuttler wrote:
> Jon,
> Stefan,
> 
> Yup, fixing the execute access on the directory path seems to have
> fixed the problem.  Not only am I far ahead of where I was yesterday,
> but I have a file in work waiting to flush (I do still need more
> work area, as I'm currently dumping something direct to tape), and I
> can see that my dumpers took turns being busy:
> 
> holding space   :  25985401k ( 37.27%)
>  dumper0 busy   :  5:00:35  ( 42.43%)
>  dumper1 busy   :  0:25:27  (  3.59%)
>  dumper2 busy   :  0:04:58  (  0.70%)
>  dumper3 busy   :  0:53:23  (  7.54%)
>  dumper4 busy   :  0:10:39  (  1.50%)
>  dumper5 busy   :  0:30:10  (  4.26%)
>  dumper6 busy   :  0:02:27  (  0.35%)
>  dumper7 busy   :  0:11:12  (  1.58%)
> 
> They show really uneven usage, but they were all utilized.  A shame
> that I didn't preserve a complete amstatus output from one of the
> previous days.
> 
> It may be that I'm running an older version (2.4.2), but I haven't
> seen anything in the /tmp/amanda output, in amcheck, or in amstatus
> that would have made the fact that /amanda/work was unavailable
> explicit in any way -- which is not to say that we didn't do
> something really stupid that caused the utilities to misbehave.
> 
> I currently have about 42 Gig of free space, with one DLE waiting to
> flush and another DLE writing directly to the tape.  When the current
> one finishes, the flush will begin, but will the last DLE, not yet
> started, wait for the holding space or will it try to go direct to
> tape?  Since it's the last for the day it probably doesn't matter all
> that much from the "when will we be done" perspective, but it does
> matter for wear and tear on the tape drive.
> 
> Now to see about adding more work area... rebuild the device table
> (manually) to allow for more drives in the fiber array, then get a
> reboot scheduled (it's a key system, so this will be fun to schedule).
> 
>                                               thank you,
> 
>                                               Brian
> 
> On Mon, Jun 26, 2006 at 01:29:02PM -0400, Brian Cuttler wrote:
> > Jon,
> > Stefan,
> > 
> > Suddenly...
> > 
> > > cd /amanda/
> > > ls -l
> > total 46344
> > drw-rw-r--   2 root     root        8192 May 22 14:31 lost+found
> > drwxr-sr-x   7 root     staff        512 Aug 22  2005 restore
> > -rw-r--r--   1 root     other    20497920 Jun 19 11:34 trimble-level0.tar
> > -rw-r--r--   1 root     other    3175936 Jun 19 11:34 trimble-level1.tar
> > drw-rw-r--   2 amanda   sys          512 May  5 02:27 work
> > drw-rw-r--   2 amanda   sys          512 Apr 12 09:22 workl
> > drw-rw-r--   2 amanda   sys          512 Apr 11 05:08 workr
> > 
> > Now that isn't right... It used to be right... I don't know what
> > happened here.

Brian,

Paul B has submitted a patch that makes amcheck verify the execute
status of the holding disk area, so a future upgrade of your
server will probably catch this problem for you.
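
Until you upgrade, a by-hand check and fix would look something like
this (a sketch only, assuming /amanda/work and its siblings from your
listing are the holdingdisk directories, and "amanda" is your dump user):

  # verify the amanda user can actually search the holding path
  su amanda -c 'test -x /amanda/work && echo ok || echo no-execute'

  # restore the missing execute (search) bits; every directory in the
  # path needs x before the files under it are reachable
  chmod u+x,g+x /amanda/work /amanda/workl /amanda/workr
  ls -ld /amanda/work   # should now show drwxrwxr--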

BTW, on your dumper usage: on my recently revised amanda system I
also noted a low percentage of multiple-dumper use.  I made several
adjustments and cut my wall-clock time by more than 50%.  Among them
were upping the total number of dumpers (was 4) and the dumpers per
client (was 1), and spreading my samba backups among all the
unix/linux hosts.  I also reconfirmed my spindle numbers so that
different parts of one disk weren't backed up at the same time.
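
Roughly, the knobs involved look like this (values, hostnames, and
paths below are illustrative, not my actual settings):

  # amanda.conf -- illustrative values
  inparallel 8         # total simultaneous dumpers (mine had been 4)
  maxdumps   2         # simultaneous dumps per client (had been 1)

  # disklist -- DLEs that share a spindle number are never dumped
  # at the same time, so give partitions on one disk the same number
  client1  /export/home  comp-user-tar  1    # same physical disk ...
  client1  /var          comp-user-tar  1    # ... same spindle number
  client1  /opt          comp-user-tar  2    # different disk, may overlap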

-- 
Jon H. LaBadie                  jon AT jgcomp DOT com
 JG Computing
 4455 Province Line Road        (609) 252-0159
 Princeton, NJ  08540-4322      (609) 683-7220 (fax)
