Subject: Re: [Networker] tape capacity
From: Wes Ono <wono AT LEGATO DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Fri, 9 Apr 2004 08:42:59 -0700

A few things to note:

1) NetWorker counts the number of bytes sent to the tape drive.
2) You can't compress data more than once.
2a) If you try, the first pass will compress the data, and the second
pass may actually expand it a little (see the sketch after this list).
2b) Compressibility is totally dependent on the data.
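
To see #2a concretely, here is a minimal sketch using Python's zlib
(purely illustrative: NetWorker clients and tape drives use different
compressors, but the effect is the same):

import random
import zlib

# Roughly 1 MB of text-like data: compressible, but not trivially periodic.
random.seed(0)
data = bytes(random.choice(b"abcdefgh ") for _ in range(1_000_000))

once = zlib.compress(data)    # first pass: shrinks it substantially
twice = zlib.compress(once)   # second pass: input is now high-entropy

print(len(data), len(once), len(twice))
# Typically len(once) is a fraction of len(data), while len(twice) is
# slightly LARGER than len(once): compressed output is effectively
# incompressible, and the second pass only adds its own framing.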

Here's an example:

Assume: 200 GB of data on disk, 2:1 compressible, and 200/400 GB
(native/compressed) drives and cartridges.

a) UNIX standard directives: NetWorker will count 200 GB sent to the tape.
The drive will compress and write 100 GB on the tape.  Back up twice to this
cartridge, and it will indicate full after 400 GB.

b) UNIX with compression directives: NetWorker will compress the data and
send 100 GB to the tape.  The drive won't compress it any more and will
write that 100 GB to the tape.  Back up twice to this cartridge, and it will
indicate full after 200 GB.  (It may be a bit less because of #2a above.)

Note that you will have backed up 400 GB of real data to the cartridge in
both cases.
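
Restated as a small Python sketch (the numbers are just the assumptions
above; nothing here is NetWorker-specific):

DATA_GB   = 200.0   # real data on disk per backup
RATIO     = 2.0     # assumed 2:1 compressibility
NATIVE_GB = 200.0   # LTO-2 native cartridge capacity (400 GB at 2:1)

# a) Standard directives: the client sends raw data; the drive compresses.
counted_a = DATA_GB                # NetWorker counts bytes sent to the drive
written_a = DATA_GB / RATIO        # the drive writes 100 GB per backup
backups_a = NATIVE_GB / written_a  # 2 backups fill the cartridge
print("a) full after", counted_a * backups_a, "GB counted")  # 400.0

# b) Compression directives: the client compresses; the drive can't shrink
#    the data any further.
counted_b = DATA_GB / RATIO        # only 100 GB is sent (and counted)
written_b = counted_b              # written essentially as-is
backups_b = NATIVE_GB / written_b  # still 2 backups fill the cartridge
print("b) full after", counted_b * backups_b, "GB counted")  # 200.0

# Either way, two 200 GB backups (400 GB of real data) fit on the cartridge.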

So, the results that you and Maarten are seeing are typical.

Hope this helps,

Wes

-----Original Message-----
From: me [mailto:lkme4me AT YAHOO DOT COM]
Sent: Friday, April 09, 2004 10:20 AM
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: Re: [Networker] tape capacity


Well, the device is as follows: /dev/rmt/tps70d2nrnsvc

So it looks like it is set for compression at the
drive (I think).

Would selecting compression directives at the client level, in addition
to compression at the drive level, be causing this? Would one override
the other?

I will turn it off at the client and see what happens, but I just
thought I would ask in any event.

Thanks again.



--- Darren Dunham <ddunham AT TAOS DOT COM> wrote:
> > Well, these last two responses clarify some issues.
> >
> > I am using LTO Ultrium 2 tapes, so I should be seeing the (200/400)
> > figure.
> >
> > As for "where" the compression is done, it seems that I am not doing
> > the correct thing here. The compression is at the client, and the
> > figures that Maarten stated ("190 or even 180") seem to be what I am
> > seeing.
> >
> > So how do I set the compression at the drive and not the client?
>
> Use the "normal" directives rather than compression directives and the
> client will not attempt to compress.
>
> The drive compression depends on your OS.  On Solaris you do it by
> configuring the "compression" device into NetWorker.  That's usually a
> tape device with 'c' or 'u', like /dev/rmt/0cbn.
>
> --
> Darren Dunham
>    ddunham AT taos DOT com
> Senior Technical Consultant         TAOS          http://www.taos.com/
> Got some Dr Pepper?                               San Francisco, CA bay area
>          < This line left intentionally blank to confuse you. >
>

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
