Amanda-Users

Subject: RE: tape_splitsize 20 Gb and 200GB LTO2 Tapes
From: "Gardiner Leverett" <leverett AT mobiusmicro DOT com>
To: "'Gavin Henry'" <ghenry AT suretecsystems DOT com>
Date: Mon, 14 Aug 2006 17:06:43 -0400
 

> -----Original Message-----

> 
> <quote who="Gardiner Leverett">
> > I've got a question for you (since I'm about to implement
> > the exact same thing): does the dump have to be tar'd?
> 
> I'm copying this to list for others, hope you don't mind?
> 
> We are using tar, but not gzip'd. We are using the hardware
> compression.

I had hardware compression turned off. 
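
For anyone following along, the "plain tar, no software compression" setup
maps to a dumptype roughly like the sketch below. This is only an
illustration of the Amanda 2.5-era syntax, and the dumptype name is made up.
Whether the drive's hardware compression is on or off is a drive/OS setting
(set outside Amanda), not something amanda.conf controls.

    define dumptype full-tar-nocomp {
        program "GNUTAR"    # plain tar dumps
        compress none       # no gzip on client or server; compression, if any, is left to the drive
    }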


> Make sure you use split_diskbuffer, or you'll get a lot of:
> 
> taper: no split_diskbuffer specified: using fallback split size of
> 10240kb to buffer localhost:/root.0 in-memory
> 
> > I am trying to dump about 80G of data from across the
> > country (with T1's at both ends), and the throughput is
> > coming in at 161kb/s, which suggests it would take about
> > 4 days to dump all this data (when it used to finish over
> > one weekend, in about 40 hours).
> 
> Whoa ;-)

Yeah, that's what I said! Something is very wrong here. 
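
(Back-of-envelope, assuming 161kb/s means roughly 161 KB/s sustained: a T1
tops out at 1.544 Mbit/s, which is only about 190 KB/s, so 161 KB/s is
already close to the link's ceiling. 80 GB is about 83,000,000 KB, and
83,000,000 / 161 is roughly 520,000 seconds, i.e. on the order of five to
six days, the same multi-day ballpark as the estimate quoted above.)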

> 
> > I want to implement the same thing, splitting across LTO2
> > tapes (but I don't have a holding disk, and I'm about to add
> > a USB one). I think the tar is what's killing this, so I just
> > want to do a full dump in 2 GB chunks and save those to tape.
> 
> Plain tar or with compression?
> 
Plain tar. 
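
For reference, the splitting setup being discussed (2 GB chunks across LTO2
tapes, with the USB disk as the split buffer) would look roughly like the
dumptype sketch below. This is only an illustration of the Amanda 2.5-era
tape-spanning options; the dumptype name, path, and sizes are placeholders,
not a tested config.

    define dumptype full-tar-split {
        program "GNUTAR"                         # plain tar, as above
        compress none
        tape_splitsize 2 Gb                      # write each dump to tape in 2 GB chunks
        split_diskbuffer "/mnt/usbdisk/amanda"   # scratch space for assembling chunks
        fallback_splitsize 64 Mb                 # in-memory buffer used if the disk buffer is unusable
    }

Without split_diskbuffer you get the taper warning quoted earlier, and chunks
are buffered in memory at the fallback split size instead.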

