Subject: Re: [Bacula-users] LVM or separate disks?
From: "John Drescher" <drescherjm AT gmail DOT com>
To: "Chris Picton" <chris AT ecntelecoms DOT com>
Date: Mon, 6 Oct 2008 12:26:47 -0400
On Mon, Oct 6, 2008 at 11:24 AM, Chris Picton <chris AT ecntelecoms DOT com> 
wrote:
> On Mon, 2008-10-06 at 09:09 -0400, John Drescher wrote:
>> On Mon, Oct 6, 2008 at 7:18 AM, Alan Brown <ajb2 AT mssl.ucl.ac DOT uk> 
>> wrote:
>> > On Fri, 26 Sep 2008, John Drescher wrote:
>> >
>> >> BTW, I would never use raid0 or LVM (without every PV being raided)
>> >> for backup data that I cared about.
>> >
>> > Spooled data isn't exactly worth keeping. After a bacula restart the
>> > contents of those directories are useless anyway.
>> >
>> I believe the user was considering putting his disk volumes on a raid
>> 0 ( or disk spanning lvm) because his raid5 write speed was too slow.
>>
>> John
>
> Just to report back:
>
> I have decided to go with software raid 5, with ext3
>
> Hardware raid 5 (even though it is proper battery backed hardware raid)
> was too slow (I got a maximum of 60 MB/s throughput)
>
I believe this is a driver issue, specifically the driver not using
the cache in write mode. I have seen reports of these kinds of
problems with 3ware cards on some Linux message boards as well.
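If it is the write cache being bypassed, that is usually something
you can check from the vendor tool rather than guessing at the
driver. A rough sketch (device names are just examples, and the
3ware syntax here is from memory, so check the tw_cli man page):

  tw_cli /c0/u0 show          # show the unit, including its write cache setting
  tw_cli /c0/u0 set cache=on  # enable the controller write cache on that unit
  hdparm -W /dev/sda          # query drive-level write caching on a plain SATA disk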

Either way, software raid under Linux on a modern machine is a good
solution and I recommend it.
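
In case it helps anyone else following the thread, a rough sketch of
the md + ext3 setup (example device names and a 64K chunk on a
4-disk array; adjust to your hardware):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 /dev/sd[bcde]1
  # stride = chunk / fs block = 64K / 4K = 16; stripe-width = stride * data disks = 16 * 3 = 48
  mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0
  cat /proc/mdstat            # let the initial resync finish before benchmarking

Aligning the ext3 stride/stripe-width to the md chunk size is worth
the small amount of arithmetic; it avoids read-modify-write cycles
on full-stripe writes.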

Thanks for reporting back.

John

