Subject: Re: Large filesystems...
From: Gene Heskett <gene.heskett AT verizon DOT net>
To: Jon LaBadie <jon AT jgcomp DOT com>, amanda-users AT amanda DOT org
Date: Mon, 19 May 2003 02:50:24 -0400
On Monday 19 May 2003 02:11, Jon LaBadie wrote:
>On Mon, May 19, 2003 at 02:54:31PM +0930, Richard Russell wrote:
>> > No, you misunderstood Gene.  Amanda CANNOT span a large
>> > filesystem across multiple tapes.  Can not, no way, no how.
>>
>> Oh. Bugger.
>>
>> That is *really* disappointing. GNU tar has (or at least,
>> appears to have) options that should enable spanning... e.g.:
>>
>>
>> And so does e2fs dump:
>
>And so do most dumps.
>
>> Could someone explain to me (or refer me to a URL that explains)
>> why Amanda can't use these features to enable multi-tape
>> dumping?
>
>Can't?  Better say doesn't.
>
>Because in amanda, they don't write to the tape drive.  Other
> programs do.  There are several possible reasons; I'll mention
> only one: how do you have 20 client hosts all dumping
> simultaneously to the same tape drive and still use those
> programs' multi-tape features?
>
>> My problem is that I (am planning to) have a single filesystem,
>> which will be around 300Gb in size, but I have a choice between
>> DLT4000 and DLT7000 tapes, at 40 or 70Gb each. I guess I can do
>> the work-around that Jon LaBadie mentioned later in the email I
>> quoted above, but I'd rather not, if I can avoid it. If I have
>> no choice, then rather than explicitly listing X different DLEs,
>> I'd rather be able to say /BIG/*, and have amanda figure out how
>> best to order them. Is that possible?
>
>Lots of amanda users split according to the procedure I outlined.
>My /BIG is only 40GB.  But then my tape is only 12GB.  I'm sorry
>it will be a "pain" to set up.  Then again, after the 5 minutes
>it took me to do it 3 years ago, I haven't had to do anything
>about it since.  That is a minor part of the configuration.
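
(For reference, the spanning features being discussed above are GNU 
tar's multi-volume mode and dump's end-of-media handling; roughly:

    # GNU tar: -M starts a new volume once -L (in units of 1024
    # bytes) worth of data has been written to the current one
    tar -c -M -L 4000000 -f /dev/nst0 /BIG

    # dump: prompts for the next tape when it hits end of media
    dump -0u -f /dev/nst0 /dev/hda5

Neither helps under amanda, because there tar and dump write to a 
holding disk or a network pipe, and amanda's taper is what actually 
writes the tape.)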

My / is 40 GB, and /usr is 52 GB.  Each is about 60% full ATM.  My 
tapes are 4 GB, and gzip manages to stick over 9 GB on a tape on rare 
occasions.  It doesn't average that, of course.

Breaking it down into subdirs (and spindle numbers) is the only way it 
could ever work here.
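
For the archives, here is roughly what that kind of split looks like: 
one DLE per subdirectory, with a shared spindle number so amanda won't 
dump two pieces of the same physical disk at once.  The host, paths 
and dumptype name below are illustrative, not my actual config:

    # disklist:  host  directory  dumptype  [spindle]
    client1  /usr/local  comp-user-tar  1
    client1  /usr/src    comp-user-tar  1
    client1  /usr/share  comp-user-tar  1

    # amanda.conf:  a GNU tar dumptype with client-side compression
    define dumptype comp-user-tar {
        program "GNUTAR"
        compress client fast
        index yes
        priority medium
    }

DLEs that share a spindle number on the same host are dumped one at a 
time, so chopping a filesystem up this way doesn't turn the nightly 
run into a seek-fest.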

In 7 days, one tape a day, and good use of compression, the roughly 60 
gigs here does fit *most* of the time.  I could cut that some, as 
part of it is an rsync'd mirror of the firewall's /root, /etc, /home 
and /usr/src directories.
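
That schedule is just the usual amanda.conf knobs, something along 
these lines (values illustrative; tapetype and lengths depend on the 
drive):

    dumpcycle 7 days     # every DLE gets a full dump within 7 days
    runspercycle 7       # amdump runs per dumpcycle
    tapecycle 8 tapes    # tapes in rotation before one is overwritten
    runtapes 1           # tapes used per run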

And, in the real world, breaking it down into ever smaller pieces is 
only a problem once, and it has no real effect on the nightly runtime.  
In the unlikely event of a recovery, the file you want is going to be 
faster to find because the drive can do fsf's quickly, and if the 
file is only 270 megs into that particular tape file, that's a lot 
quicker than it being 3700 megs into the much larger tape file it 
could have been had the disk not been broken down into individual 
DLEs.  That alone could save you several hours and make the frogs 
think you are a magician, which never hurts the cause AFAIK.
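
In practice that recovery goes something like this (config, device, 
file number and DLE names are made up for the example; amrecover does 
the same thing interactively):

    # which tape, and which file number on it, holds the dump?
    amadmin DailySet1 find client1 /usr/src

    # skip straight to that tape file, then pull the dump off it;
    # amrestore strips amanda's 32k header and hands back a gnutar
    # archive, so tar can pick out just the file you need
    mt -f /dev/nst0 rewind
    mt -f /dev/nst0 fsf 3
    amrestore -p /dev/nst0 client1 /usr/src | tar -xpvf - ./lost/file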

Look at it this way: more DLEs means faster recoveries when a user 
accidentally deletes the sales files for the company and absolutely 
has to have them back by 2pm.  But I'd expect a half gallon of 
Lynchburg, Tennessee's finest for the service of bailing his butt 
out of all that hot water too.  That's a prime example of the BOFH 
attitude, and should be used to educate if they are educable.  If 
they are not, well...  There is always Darwin.  And lest that make 
me look like a heavy drinker, I was given one of those last July, 
and have used about 4" out of it so far.

But, if I don't quit downloading whole 8-disk distros (DSL is nice) 
:), I'm going to be forced to make dumpcycle 8 days to make it all 
fit. :(

-- 
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
99.26% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message
by Gene Heskett are:
Copyright 2003 by Maurice Eugene Heskett, all rights reserved.

