Re: [Bacula-users] LVM or separate disks?

From: "Lukasz Szybalski" <szybalski AT gmail DOT com>
To: "John Drescher" <drescherjm AT gmail DOT com>
Date: Wed, 8 Oct 2008 12:26:55 -0500

On Fri, Sep 26, 2008 at 3:44 PM, John Drescher <drescherjm AT gmail DOT com> wrote:
>> I am using a PCIe megaraid SAS controller, and the drives are 7200 RPM SATA.
>>
>> I have just tried reconfiguring the drives as a hardware raid 5 array,
>> and I still only get about 66 MB/sec throughput. I have tried both ext3
>> and xfs, with various format and mount options.
>>
> If this is a true hardware raid array and not a biosraid (or fakeraid),
> note that some controllers default to turning the write cache off,
> especially if you do not have the battery backup unit attached. This
> causes the same effect as a software raid with too small a stripe cache.
>
>>
>> Even using md raid 5, with exactly the same xfs mount options as you
>> and the same stripe sizes, only gives me about 60 MB/sec.
>>
>> I am at a loss as to what you are doing differently from me...
>>
>> As soon as I switch to RAID 0, I get over 300 MB/sec throughput.
>
> Have you got any hdparm numbers for the individual disks?
>
> hdparm -tT /dev/sdb
>
> Mine get 105 MB/s for the 7200.11 drives and 85 MB/s for the 7200.10 drives.
>
> Here are some more stripe_cache_size results. Perhaps increasing the
> 1024 to 2048 or 4096 will help.
>
> This time I am on a slower single-core 2.0 GHz Athlon 64 3200+, with
> the same ASUS M2N motherboard and 2 GB of memory instead of 8 (I
> repeated the test on the other machine with a 16 GB file and got 186 MB/s).
>
> # free -m
>             total       used       free     shared    buffers     cached
> Mem:          2010       1992         17          0         53       1647
> -/+ buffers/cache:        291       1718
> Swap:         4424          0       4423
>
> datastore0 ~ # cat /proc/cpuinfo
> processor       : 0
> vendor_id       : AuthenticAMD
> cpu family      : 15
> model           : 79
> model name      : AMD Athlon(tm) 64 Processor 3200+
> stepping        : 2
> cpu MHz         : 2000.000
> cache size      : 512 KB
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 1
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
> mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext
> fxsr_opt rdtscp lm 3dnowext 3dnow up pni cx16 lahf_lm svm cr8_legacy
> bogomips        : 4021.83
> TLB size        : 1024 4K pages
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 40 bits physical, 48 bits virtual
> power management: ts fid vid ttp tm stc
>
>
> Here I am using software raid 6 with 6 x 320 GB Seagate 7200.10 drives.
>
> datastore0 ~ # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]
> md0 : active raid1 sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
>      256896 blocks [6/6] [UUUUUU]
>
> md2 : active raid6 sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1] sda4[0]
>      1199283200 blocks level 6, 256k chunk, algorithm 2 [6/6] [UUUUUU]
>
> md1 : active raid6 sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
>      46909440 blocks level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>
> unused devices: <none>
>
> The first test starts with the system default stripe cache:
>
> datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
> 8192+0 records in
> 8192+0 records out
> 8589934592 bytes (8.6 GB) copied, 141.612 s, 60.7 MB/s
>
> datastore0 ~ # echo 1024 > /sys/block/md1/md/stripe_cache_size
> datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
> 8192+0 records in
> 8192+0 records out
> 8589934592 bytes (8.6 GB) copied, 112.452 s, 76.4 MB/s
>
> datastore0 ~ # echo 2048 > /sys/block/md1/md/stripe_cache_size
> datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
> 8192+0 records in
> 8192+0 records out
> 8589934592 bytes (8.6 GB) copied, 65.093 s, 132 MB/s
>
> For reproducibility, I ran again with 1024:
>
> datastore0 ~ # echo 1024 > /sys/block/md1/md/stripe_cache_size
> datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
> 8192+0 records in
> 8192+0 records out
> 8589934592 bytes (8.6 GB) copied, 109.951 s, 78.1 MB/s
>
> Now 4096:
>
> datastore0 ~ # echo 4096 > /sys/block/md1/md/stripe_cache_size
> datastore0 ~ # dd if=/dev/zero of=/bigfile bs=1M count=8192
> 8192+0 records in
> 8192+0 records out
> 8589934592 bytes (8.6 GB) copied, 59.2806 s, 145 MB/s
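>
> One caveat: stripe_cache_size resets to the kernel default on reboot,
> so to keep a larger value the echo has to be reapplied at boot, e.g.
> from /etc/rc.local or an equivalent init script:
>
> echo 4096 > /sys/block/md1/md/stripe_cache_size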
>
>


Just a little follow-up on this.

How accurate are these numbers?

The reason I'm asking is that every time I run:
dd if=/dev/zero of=/bigfile bs=1M count=8192

and watch iostat -m 1 (which shows me the read/write throughput of
each hard drive), I see each hdd in my raid 5 writing at about
2 MB/s, but when I stop the dd (30 seconds in) they speed up to
30-40 MB/s for the next 5-8 seconds. Is there a delay when writing
that data? Why? And if memory is involved, is the 145 MB/s the
memory/hdd combined speed or the actual hdd speed?
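
My guess is the kernel's writeback caching: dd returns as soon as the
data is in the page cache, and the kernel flushes it to the disks in
the background, which would explain the burst of writes after dd
exits. If that is right, forcing the flush into the timing should
show the real disk rate. GNU dd has options for this (untested on my
box):

# count the final flush in the elapsed time:
dd if=/dev/zero of=/bigfile bs=1M count=8192 conv=fdatasync

# or bypass the page cache entirely with direct I/O:
dd if=/dev/zero of=/bigfile bs=1M count=8192 oflag=direct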


Thanks,
Lucas

