Subject: Re: Large filesystems...
From: Jon LaBadie <jon AT jgcomp DOT com>
To: amanda-users AT amanda DOT org
Date: Mon, 19 May 2003 02:11:20 -0400
On Mon, May 19, 2003 at 02:54:31PM +0930, Richard Russell wrote:
> > No, you misunderstood Gene.  Amanda CANNOT span a large 
> > filesystem across multiple tapes.  Can not, no way, no how.
> 
> Oh. Bugger.
> 
> That is *really* disappointing. GNU tar has (or at least, appears to
> have) options that should enable spanning... eg:
> 
> 
> And so does e2fs dump:
> 


And so do most dumps.
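
For reference, GNU tar's own multi-volume mode (the kind of option being
referred to above) looks roughly like the sketch below.  This is purely
illustrative; the device, volume size, and change-tape script are made-up
examples, and it is not how amanda drives the tape:

    # GNU tar multi-volume sketch (illustrative only)
    # -M  write a multi-volume archive
    # -L  volume length in units of 1024 bytes (roughly 35 GB here)
    # -F  script to run when a volume fills, e.g. to load the next tape
    tar -c -M -L 36700160 -F /usr/local/sbin/next-tape \
        -f /dev/nst0 /BIG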

> Could someone explain to me (or refer me to a URL that explains) why
> Amanda can't use these features to enable multi-tape dumping?


Can't?  Better say doesn't.

Because in amanda, tar and dump don't write to the tape drive; amanda's
own taper process does.  There are several possible reasons; I'll mention
only one.  How do you have 20 client hosts all dumping simultaneously to
the same tape drive while still using those programs' multi-volume
features?
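
In amanda's design, each client dumps in parallel to a holding disk on the
tape server, and a single taper process then copies the finished dumps to
tape one at a time.  The amanda.conf pieces behind that look roughly like
this (the values and path here are only examples):

    # sketch of the relevant amanda.conf settings (example values)
    inparallel 10                   # dump up to 10 clients at once
    holdingdisk hd1 {
        directory "/dumps/amanda"   # parallel dumps land here first
        use 8000 Mb                 # space the taper later flushes to tape
    }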

> My problem is that I (am planning to) have a single filesystem, which
> will be around 300Gb in size, but I have a choice between DLT4000 and
> DLT7000 tapes, at 40 or 70Gb each. I guess I can do the work-around that
> Jon LaBadie mentioned later in the email I quoted above, but I'd rather
> not, if I can avoid it. If I have no choice, then rather than explicitly
> listing X different DLEs, I'd rather be able to say /BIG/*, and have
> amanda figure out how best to order them. Is that possible?


Lots of amanda users split according to the procedure I outlined.
My /BIG is only 40GB.  But then my tape is only 12GB.  I'm sorry
it will be a "pain" to set up.  Then again, after the 5 minutes
it took me to do it 3 years ago, I haven't had to do anything
about it since.  That is a minor part of the configuration.
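
The split itself is just a matter of listing subdirectories of the big
filesystem as separate DLEs in the disklist, each comfortably smaller than
one tape.  A hypothetical disklist for a 300GB /BIG might look like this
(the hostname, subdirectory names, and dumptype name are placeholders):

    # one DLE per chunk of /BIG, each well under one tape
    bigserver.example.com  /BIG/projects  comp-user-tar
    bigserver.example.com  /BIG/homes     comp-user-tar
    bigserver.example.com  /BIG/archive   comp-user-tar
    bigserver.example.com  /BIG/misc      comp-user-tar

Once they are separate DLEs, amanda's planner spreads the full dumps of
those entries across the dumpcycle on its own.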

-- 
Jon H. LaBadie                  jon AT jgcomp DOT com
 JG Computing
 4455 Province Line Road        (609) 252-0159
 Princeton, NJ  08540-4322      (609) 683-7220 (fax)
