Subject: Re: RAIT in 2.4.3b4
From: Gene Heskett <gene_heskett AT iolinc DOT net>
To: Scott Mcdermott <smcdermott AT questra DOT com>, amanda-users AT amanda DOT org
Date: Thu, 23 Jan 2003 20:03:27 -0500
On Thursday 23 January 2003 15:12, Scott Mcdermott wrote:
>docs/RAIT says two things that I'm confused about:
>
>   - "[RAIT supports only 3 or 5 drive configurations.]"  I have
> four drives and no capacity for another in my library... this
> means I should just remove one of my drives right?
>
>   - "currently it is only integrated with the chg-manual script."
> Ok this means I have no hope of using my library automatically
> (eg, as I could with chg-mtx if not using RAIT) ?
>
>I'm also curious about a couple of things:
>
>   - is there a source tree more recent than 2.4.3b4 (which is
> from last August?) that I might be able to pull from? This RAIT
> stuff appears to be a little bleeding edge and I want to be sure
> to have the code with all the latest bugfixes...
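
For what it's worth on the drive-count question, a 3-drive RAIT 
tapedev in amanda.conf looks something like this (per docs/RAIT the 
braces expand into the individual drive names; the device paths below 
are only examples, substitute your own):

  tapedev "rait:{/dev/nst0,/dev/nst1,/dev/nst2}"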

Go to the amanda.org web page; quite a ways down the page you'll 
find a link that says something like "latest snapshots *here*", 
which points to a umontreal.edu address.  You'll find both 2.4.3 
and 2.5.0 snapshots there.  I'm currently running the one dated 
20030117 without any problems here.

Bookmark it; you'll need it again if you want to stay current with 
the rest of us who like to run the bleeding-edge stuff.  And make a 
script out of your configure options so you can re-create them for 
every new version; that way there won't be any surprises when you 
update.
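
Something as simple as this does the trick (the options shown are 
only placeholders; use whatever your site actually builds with):

  #!/bin/sh
  # rebuild.sh - rerun configure the same way on every new snapshot
  ./configure --prefix=/usr/local \
      --with-user=amanda \
      --with-group=disk

Then bringing up a new snapshot is just "./rebuild.sh && make && 
make install".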

>   - is anyone using RAIT in a production environment?
>
>   - with just the tapes and no Amanda, is it possible to
> "destripe" a RAIT set or otherwise get at the data?
>
>   - is there a 2.5 tree somewhere with additional features?
>
>Thanks.
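
On the destripe question: nothing I know of will do it for you, but 
if the layout is what docs/RAIT describes (each block split evenly 
across the data drives, with XOR parity going to the last drive), 
you could in principle reassemble it by hand.  A rough sketch for a 
3-drive set, assuming 32k tape blocks and raw images already pulled 
off the two data tapes with dd (file names and block size are 
guesses, check your tapetype):

  #!/bin/sh
  # Interleave 16k half-blocks from the two data-tape images.
  bs=16384
  i=0
  > restored.img
  while dd if=tape0.img of=h0 bs=$bs skip=$i count=1 2>/dev/null \
        && [ -s h0 ]
  do
      dd if=tape1.img of=h1 bs=$bs skip=$i count=1 2>/dev/null
      cat h0 h1 >> restored.img
      i=`expr $i + 1`
  done
  rm -f h0 h1

The parity tape only matters if one of the data tapes goes bad; with 
all the data tapes readable you can ignore it.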

-- 
Cheers, Gene
AMD K6-III@500MHz 320M
Athlon 1600XP@1400MHz  512M
99.22% setiathome rank, not too shabby for a WV hillbilly
