Networker

Subject: Re: [Networker] tape drive hardware compression a bad thing?
From: Shaun Ellis <sellis AT LEGATO DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Wed, 29 Jan 2003 09:34:29 -0800
I am the product manager for devices in NetWorker.

Legato supports hardware compression.

Shaun Ellis
NetWorker Management & OpenVMS Product Manager
LEGATO Systems Inc
3210 Porter Drive
Palo Alto, CA 94304

Phone: +1 (650) 842 9548
Mobile: +1 (408) 431 6997


-----Original Message-----
From: Stan Horwitz [mailto:stan AT temple DOT edu]
Sent: Wednesday, January 29, 2003 9:31 AM
To: Legato NetWorker discussion; Jim Lane
Subject: Re: [Networker] tape drive hardware compression a bad thing?

On Wed, 29 Jan 2003, Jim Lane wrote:

> is there any way I could get feedback as to how well hardware or
> software compression is working? it might be nice to compare in case one
> proves to be better than the other. as it is I'd never know.

Such comparisons would be of dubious value because the results are sensitive to
the type of data you have to back up, your network bandwidth, and the bandwidth
of your tape drive's data ports. In my situation, with lots of datasets, email,
and text-based web pages, hardware compression is amazing. We often get
50% - 100% more compression than the rated compression of our AIT2 drives.
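The point about data type is easy to demonstrate for yourself. As a rough
sketch (using Python's zlib rather than a tape drive's hardware codec, so the
exact ratios will differ, but the trend is the same): repetitive text like
logs, email, or HTML compresses dramatically, while random or already-compressed
data does not shrink at all.

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Return compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 6)) / len(data)

# Highly repetitive text (like web pages or mail headers) compresses very well.
text = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 1000

# Random data (encrypted or already-compressed files) barely shrinks at all;
# it can even grow slightly once the compression framing overhead is added.
random_bytes = os.urandom(len(text))

print(f"text:   {ratio(text):.3f}")    # small fraction: big savings
print(f"random: {ratio(random_bytes):.3f}")  # close to (or above) 1.0
```

So a single "compression ratio" number only means anything relative to a
particular mix of data, which is why blanket hardware-vs-software comparisons
are hard to generalize.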

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

