Veritas-bu

Subject: [Veritas-bu] PCI bus and HBA throughput
From: jmcdon23 AT csc.com DOT au (jmcdon23 AT csc.com DOT au)
Date: Sat, 26 Apr 2003 19:43:38 +1000
Hi

Interesting, but have you made allowance for hardware compression?
Are you really just measuring how well all those "zeroes" compress?
Where is the bottleneck: at the tape head or in the compression firmware?
Try the same exercise with data that has already been compressed.
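
For example, you could time a write of random data straight to the drive and compare it against the /dev/zero numbers; random data is effectively incompressible, which serves the same purpose here as pre-compressed data. A rough sketch in Python follows; the device path, block size and total size are placeholders, not anything from the original tests:

#!/usr/bin/env python3
# Time a write of incompressible data to a tape drive, so that hardware
# compression cannot inflate the apparent transfer rate.
# DEVICE, BLOCK and TOTAL are placeholders -- adjust for your own setup.
import os
import time

DEVICE = "/dev/rmt/0cbn"   # hypothetical Solaris no-rewind tape device
BLOCK = 256 * 1024         # bytes per write
TOTAL = 2 * 1024**3        # total bytes to write

# One random block is reused to keep CPU cost out of the measurement;
# regenerate it per write if you suspect the drive can exploit repetition.
buf = os.urandom(BLOCK)

fd = os.open(DEVICE, os.O_WRONLY)
written = 0
start = time.monotonic()
while written < TOTAL:
    written += os.write(fd, buf)
os.close(fd)

elapsed = time.monotonic() - start
print("%.1f MByte/sec" % (written / elapsed / 1e6))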

Regards
Jim McDonald


"Shafto, Eric" <Eric.Shafto AT drkw DOT com>@mailman.eng.auburn.edu on 
04/26/2003
12:42:50 AM

Sent by:    veritas-bu-admin AT mailman.eng.auburn DOT edu


To:    "'Paul Winkeler'" <pwinkeler AT pbnj-solutions DOT com>, "'Vijay Korde'"
       <vijay_korde AT hotmail DOT com>
cc:    veritas-bu AT mailman.eng.auburn DOT edu
Subject:    RE: [Veritas-bu] PCI bus and HBA throughput



It would be interesting to see whether the situation improves noticeably
with multiple HBAs. Does anyone have some data to share?

-----Original Message-----
From: Paul Winkeler [mailto:pwinkeler AT pbnj-solutions DOT com]
Sent: Friday, April 25, 2003 8:57 AM
To: 'Vijay Korde'
Cc: veritas-bu AT mailman.eng.auburn DOT edu
Subject: RE: [Veritas-bu] PCI bus and HBA throughput


Hi Vijay,

Here is what we found using two other brands of 2Gbit PCI HBAs in
SunFire 880s going to T9940Bs:
1) To a single drive using a 2Gbit HBA, the observed peak was ~40MByte/sec,
regardless of 33MHz or 66MHz bus.
2) Writing 3 streams to 3 drives across the same 2Gbit HBA, the observed peak
was ~100MByte/sec, again regardless of 33MHz or 66MHz bus.
During all tests the source data was fully cached in RAM (the machine had
8GByte) and no other processes were running (the machine had 8 CPUs).
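
A per-stream rate like the ones above can be timed with a simple copy loop from an already-cached file to the tape device. A minimal sketch of that kind of test (the source file, device path and block size below are placeholders, not the actual test harness):

#!/usr/bin/env python3
# Time a single-stream copy from a file that is already in the filesystem
# cache to one tape drive, and report the sustained rate.
# SRC, DEVICE and BLOCK are placeholders -- adjust for your own setup.
import os
import time

SRC = "/data/testfile"     # hypothetical source file, pre-read so it is cached
DEVICE = "/dev/rmt/0cbn"   # hypothetical no-rewind tape device
BLOCK = 256 * 1024         # bytes per read/write

src = os.open(SRC, os.O_RDONLY)
dst = os.open(DEVICE, os.O_WRONLY)

copied = 0
start = time.monotonic()
while True:
    chunk = os.read(src, BLOCK)
    if not chunk:
        break
    copied += os.write(dst, chunk)
elapsed = time.monotonic() - start

os.close(src)
os.close(dst)
print("%d bytes in %.1fs -> %.1f MByte/sec" % (copied, elapsed, copied / elapsed / 1e6))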

We concluded that the bottleneck was likely in the handling of the SAN
protocol across the HBA. Our basis for this conclusion was the following:
- Writing zeroes (/dev/zero) to a single T9940B drive yielded a rate of
69MByte/sec, which is basically the upper limit at which StorageTek claims
the drive can take data in; so we know it can go that fast.
- Pushing zeroes to 2 drives simultaneously can be done at an aggregate
rate of ~130MByte/sec on the 66MHz bus, and closer to ~120MByte/sec on the
33MHz bus.
- Going to 3 simultaneous drives with zeroes yields rates of ~138MByte/sec
and ~129MByte/sec for the 66MHz and 33MHz buses respectively.
In other words, the bus speed gave the 66MHz bus a slight edge, but hardly
one worth bothering about. It would be very interesting to see what the
results look like when you use multiple HBAs simultaneously...
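
For anyone who wants to repeat the zero-writing part across several drives, a minimal sketch in Python (device names, block size and per-drive size are placeholders) might look like this:

#!/usr/bin/env python3
# Write zeroes to several tape drives at once, one process per drive,
# and report the aggregate rate over the wall-clock time of the run.
# DEVICES, BLOCK and TOTAL_PER_DRIVE are placeholders -- adjust as needed.
import os
import time
from multiprocessing import Process

DEVICES = ["/dev/rmt/0cbn", "/dev/rmt/1cbn", "/dev/rmt/2cbn"]  # hypothetical
BLOCK = 256 * 1024
TOTAL_PER_DRIVE = 2 * 1024**3

def writer(device):
    buf = b"\0" * BLOCK                  # zeroes, as in the tests above
    fd = os.open(device, os.O_WRONLY)
    written = 0
    while written < TOTAL_PER_DRIVE:
        written += os.write(fd, buf)
    os.close(fd)

if __name__ == "__main__":
    procs = [Process(target=writer, args=(d,)) for d in DEVICES]
    start = time.monotonic()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.monotonic() - start
    total = len(DEVICES) * TOTAL_PER_DRIVE
    print("aggregate: %.1f MByte/sec" % (total / elapsed / 1e6))

The aggregate figure comes from dividing the total bytes written by the wall-clock time of the slowest stream.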

My advice: don't hang more than 2 drives off a single HBA.

PaulW
www.pbnj-solutions.com - IT Solutions That Stick

