Subject: Re: tapetape (not in faq-o-matic)
From: Brian Cuttler <brian AT wadsworth DOT org>
To: amanda-users AT amanda DOT org
Date: Wed, 24 May 2006 08:57:37 -0400
Jon,

On Tue, May 23, 2006 at 05:40:02PM -0400, Jon LaBadie wrote:
> On Tue, May 23, 2006 at 01:22:02PM -0700, Pavel Pragin wrote:
> > Brian Cuttler wrote:
> > 
> > >Does anyone have the tape type for the LTO3 (Quantum) ?
> > >
> > >Are there any other parameters I should tweak to get better
> > >performance/utilization ?
> > >
> > >This is still a reasonable default ?
> > >
> > >tapebufs 20
> > >
> > >I am running the StorEdge C2 jukebox with lto3 drive on a SunFire 280R
> > >under Solaris 9 with 4 gig of memory.
> > >
> > >
> > Try using this command to determine tapetype:
> > amtapetype -f /dev/nst0   (/dev/nst0) will be diff for solaris i think
> > 
> 
> A tapetype run on an lto-3 drive without a good estimate option
> might take about 14 days to complete :(

That may explain why this was STILL running when I came in this morning.

# date; amtapetype -f /dev/rmt/3n; date
Tue May 23 16:09:49 EDT 2006
Writing 2048 Mbyte   compresseable data:  33 sec
Writing 2048 Mbyte uncompresseable data:  33 sec
Estimated time to write 2 * 1024 Mbyte: 33 sec = 0 h 0 min
wrote 3470778 32Kb blocks in 10614 files in 37354 seconds (short write)
wrote 1085906 32Kb blocks in 6662 files
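(For reference, newer amtapetype versions accept an -e option to supply an
estimated capacity, e.g. "amtapetype -e 400g -f /dev/rmt/3n", which avoids
the multi-day run Jon mentions. The definition you end up with should look
roughly like the sketch below; the numbers here are the nominal LTO-3
figures, not measured values, so treat them as placeholders:)

```
define tapetype LTO3 {
    comment "Quantum LTO-3 -- nominal values, not measured"
    length 400 gbytes     # native (uncompressed) capacity
    filemark 0 kbytes
    speed 80 mbytes       # nominal native transfer rate
}
```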

Truth be told, on this system, which is the amanda server for a
fair number of clients, we disabled SW-compression in favor of
HW-compression on the StorEdge L9 jukeboxes with LTO drives. We
had two of those, one running amanda for the client==server case
and one for client!=server; both configs ran 5/week.
We also saved a number of slots in one jukebox for a 3rd amanda config
which ran 1/week with always-full.

We found that the nightly would sometimes run a second tape and the
weekly was running onto the 4th tape.

With the failure of one L9 we had to revise the situation. Not that
I wish to speak badly of the L9/LTO: it ran without error (apart from
the occasional need to recalibrate, which it does on power-up) for
close to 5 years, and it was our decision not to place it on
maintenance. With the failure
of one L9 we moved to a more traditional 5/week for all DLE under the
single amanda config. We are finding that with HW-compression and a
5 day dumpcycle and 5 runs/week we fit all DLE onto a single tape
each night with no difficulty.

Strong argument for the L9/LTO. We will probably add more clients once
the C2/LTO3 is in play, I'm sure it will last us for years... but I'm
going to need more disk space for amanda work area.

Oh, here is a thought/question. Would read performance during the
dumper portion be any better if the DLE's chunks were round-robined
across multiple work areas, rather than placing them into a single
work area until that work area was filled?
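(For what it's worth, amanda.conf does let you define more than one
holding disk, along the lines of the sketch below; the paths and sizes
here are made up, and exactly how amanda distributes chunks across them
is the open question above:)

```
# amanda.conf -- hypothetical paths and sizes
holdingdisk hd1 {
    directory "/dumps1/amanda"
    use 100 Gb
    chunksize 1 Gb
}
holdingdisk hd2 {
    directory "/dumps2/amanda"
    use 100 Gb
    chunksize 1 Gb
}
```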

I hate to ask this, as it exposes my ignorance of unix file systems,
not that there is only one to choose from (we have ext3(?), xfs and
ufs in use now for amanda work areas, probably more to follow).

When the chunk files are opened, is there a way to pre-allocate the
disk space, to reduce file fragmentation/write times (window turns)?

I know, old ODS-2 terms, back when I'd make sure to specify the file
extension size when doing something like this. Is there a unix equivalent?

YUMV (your unix may vary).

                                                thank you,

                                                Brian