Thanks to one of our more astute readers for keeping me honest.
When I referred to the single parity disk, I was referring to the parity
"overhead", or percentage of parity vs. data. In a standard RAID-5
implementation, you have 4+P, or 25% additional overhead due to parity. In
the IBM SSA RAID, you can have from 2+P (50% overhead) to 15+P (7%
overhead).
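The overhead figures above follow directly from the array geometry: an N+P array has one parity disk per N data disks, so the parity overhead is 1/N. A quick sketch (the helper function is mine, just for illustration):

```python
def parity_overhead(data_disks: int) -> float:
    """Parity overhead of an N+P RAID-5 array: one parity disk
    per `data_disks` data disks, expressed as a fraction of data."""
    return 1 / data_disks

# The array sizes mentioned above:
for n in (2, 4, 15):
    print(f"{n}+P: {parity_overhead(n):.0%} overhead")
# 2+P: 50% overhead, 4+P: 25% overhead, 15+P: 7% overhead
```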
There are advantages to certain configurations. To quote from some IBM
documentation I have,
-Small arrays have better data availability (less disks to fail)
-Small arrays are more expensive (more overhead)
-Small arrays have shorter rebuild times
-Large arrays have more chances of 2 failures (more disks)
-Large arrays are less expensive
-Large arrays have longer rebuild times
-USE HOT SPARES!
So, like everything, there is a tradeoff. As for performance, I stand
corrected. The docs actually say to use as many disks as you can.
-Read performance *can* be the aggregate of the disks in the group
-Write performance is impacted by RAID-5
-Each RAID-5 write results in up to 2 reads + 2 writes (4 I/Os)
-Read old data, read old parity, write new data, write new parity
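The four-I/O sequence above can be sketched with XOR parity on toy byte strings (a hand-rolled illustration of the read-modify-write cycle, not the adapter's actual firmware logic; all names are mine):

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# A toy 2+P stripe: two data blocks plus their parity.
d0 = b"\x0f\x0f"
d1 = b"\xf0\x01"
parity = xor(d0, d1)

# Small write replacing d0 -- the four I/Os:
new_d0 = b"\xaa\xaa"
old_d0 = d0                                     # 1. read old data
old_parity = parity                             # 2. read old parity
parity = xor(xor(old_parity, old_d0), new_d0)   # compute new parity
d0 = new_d0                                     # 3. write new data
                                                # 4. write new parity

# Invariant: parity is still the XOR of all data blocks in the stripe.
assert parity == xor(d0, d1)
```

XOR-ing out the old data and XOR-ing in the new is why only the changed block and parity need touching, at the cost of first reading both old copies.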
You can increase the write performance by adding "Fast Write Cache" to the
SSA Adapter.
I guess what I was trying to say (however poorly) was that overall, you are
probably better off with a middle-of-the-road array size rather than very
large or very small, given the tradeoffs.
--------------------------------------------------------------
Matt Cleland mcleland AT msiinet DOT com
Midland Systems, Inc. (618) 345-0864
St. Louis, MO (618) 346-1779 FAX