Subject: Re: LTO1 tapetype
From: Jay Lessert <jayl AT accelerant DOT net>
To: amanda-users AT amanda DOT org
Date: Thu, 19 Jun 2003 15:16:45 -0700
On Thu, Jun 19, 2003 at 11:33:59PM +0200, Paul Bijnens wrote:
> However, the hardware compression algorithm seems to be a very
> good one: the measured capacity is still about 100 GByte.
> This means that the algorithm does not fall into the known pitfall
> of blindly imposing its compression engine on an uncompressible
> data stream.  (gzip does this too, compress does not.)

That is correct, and it should be the case for all LTO Ultrium drives:
they have big write caches and are supposed to decide block by block
(I don't know how big a "block" is) whether or not to compress each
block.
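Roughly the same trick, sketched in Python (the block size and the
on-"tape" format here are made up for illustration; a real drive's
firmware obviously works differently):

    import os
    import zlib

    BLOCK_SIZE = 64 * 1024   # made-up block size; real drives choose their own

    def write_block(block, tape):
        # Try compressing the block; keep whichever form is smaller,
        # with a flag saying which one was stored.
        compressed = zlib.compress(block)
        if len(compressed) < len(block):
            tape.append(("compressed", compressed))
        else:
            tape.append(("raw", block))      # incompressible data goes out as-is

    tape = []
    write_block(os.urandom(BLOCK_SIZE), tape)   # random data: stored raw
    write_block(b"A" * BLOCK_SIZE, tape)        # repetitive data: stored compressed

The point is just that incompressible blocks never get bigger than
they already are.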

> If this is really the case, then, maybe it's not necessary
> to disable hardware compression at all.  And maybe, there isn't
> even a possibility to do it (just as there is no setting to
> tune your error correcting bits).

You can definitely disable compression.  For example, in Solaris land,
with an up-to-date factory st driver (or with the HP st.conf), the
l, h, and m devices all have compression disabled, and I can make my
drives slow and small (I use HW compression :-) any time I want.
(The default, c, and u devices have compression on.)
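So picking the hardware compression setting is just a matter of which
device node you hand to Amanda.  A minimal sketch (the instance number
and suffix letters are only an example; check your own /dev/rmt entries
and st.conf):

    # amanda.conf
    tapedev "/dev/rmt/0lbn"    # "l" density node: HW compression off (per st.conf above)
    # tapedev "/dev/rmt/0cbn"  # "c" density node: HW compression on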

The 13 MB/s tape rate seen here is pretty normal.  The standard
datasheet LTO-1 native sustained spec is 15 MB/s, and I routinely get
20 MB/s over a 50 GB Amanda holding disk image with SW compression off
and HW compression on.
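(For scale: 50 GB at 20 MB/s is about 50,000 MB / 20 MB/s = 2,500 s,
or roughly 42 minutes for the image; the only way to beat the 15 MB/s
native figure like that is for the HW compression to shrink the stream
before it hits the media.)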

-- 
Jay Lessert                               jay_lessert AT accelerant DOT net
Accelerant Networks Inc.                       (voice)1.503.439.3461
Beaverton OR, USA                                (fax)1.503.466.9472