Subject: [Veritas-bu] (no subject)
From: Peters.Devon at con-way.com (Peters, Devon C)
Date: Tue, 23 May 2006 18:31:58 -0700
I've been doing some testing with a Sun V40z server running RHEL4, so I
figured I could share some of my experiences with you...

The V40z has the following:
2   Opteron 844 CPUs
8GB RAM
2   Intel Quad Pro/1000MT cards (8 1Gbit ports total)
2   Emulex LP10000DC HBAs (4 2Gbit ports total)
18  OPEN-V LUNs (Hitachi USP600) - multipathed over 2 HBA ports
4   IBM 2Gbit fibre LTO3 tape drives - 2 drives per HBA port

The NICs are plugged directly into the Cisco 6500 core switch, and all
fibre (HBA, Drives, Storage) is connected to a McData Director (don't
know the model).

I did some network testing with iperf on one of the Intel quad cards
(this was before I got the 2nd card).  One port can sustain about
120MB/s, 2 ports about 240MB/s, and 3 ports about 340MB/s; adding a
4th port doesn't gain anything (still about 340MB/s).  I did this
testing with and without channel bonding, and had similar results.  I
would suspect that the second card will double this performance,
because each card is installed on a separate dedicated PCI bus.
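
For the separate-IP (unbonded) case, the test boils down to one iperf
pair per port.  Something like the following; the hostnames and
addresses here are just examples, not my actual setup:

    # on each receiving host (one per port under test):
    iperf -s

    # on the media server, one client per port; -B binds the stream
    # to that port's local IP so each stream goes out its own NIC:
    iperf -c recv1 -B 192.168.1.11 -t 60 &
    iperf -c recv2 -B 192.168.1.12 -t 60 &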

For channel bonding software, I just used what's included in the Linux
kernel - specifically the 802.3ad and balance-alb modes.  802.3ad (aka
LACP) requires the switch to support the protocol, while balance-alb
is a software-only method that doesn't require anything special from
the switch.  Both seem to perform fine, though there is slightly more
CPU overhead with balance-alb.  With channel bonding configured, the
interfaces achieved the same throughput as they did when each
interface had its own IP address.
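
For reference, the RHEL4 side of the setup looks roughly like this
(addresses are examples; swap in mode=balance-alb for the
switch-independent method):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (one per slave NIC)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none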

When testing the tape drives (dd from /dev/zero to each drive; example
commands after the list), I was able to get:

- 165MB/s to one tape drive
- 174MB/s to two tape drives - if the drives are on the same HBA ports
- 330MB/s to two tape drives - if the drives are on separate HBA ports
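
The test itself was just timed dd runs, along these lines (the block
size is an example; note that /dev/zero compresses almost perfectly,
so this measures the transport more than the drives' native rate):

    # ~10GB of zeros to the first non-rewinding tape device;
    # divide bytes written by elapsed time for MB/s:
    time dd if=/dev/zero of=/dev/nst0 bs=256k count=40000

    # two drives in parallel, to check per-HBA-port scaling:
    time dd if=/dev/zero of=/dev/nst0 bs=256k count=40000 &
    time dd if=/dev/zero of=/dev/nst1 bs=256k count=40000 &
    wait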

So, with LTO3 tape drives you probably want only one drive per 2Gbit
port.  If you're using the newer 4Gbit drives, you could probably put
2 drives on a single 4Gbit HBA port without hampering the performance
of the drives.
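
The back-of-the-envelope numbers behind that recommendation, using the
rates above:

    2Gbit FC port: ~200MB/s usable (after 8b/10b encoding)
    1 LTO3 stream  @ ~165MB/s -> fits on one 2Gbit port
    2 LTO3 streams @ ~330MB/s -> more than one 2Gbit port can carry
    4Gbit FC port: ~400MB/s usable -> room for 2 such streams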

Now, for the throughput when actually using NetBackup...  I started
backups of 40 clients pretty much evenly spread across 6 switches (each
uplinked to the 6500 core via a 1Gbit link) using an MPX level of 10.
The backups achieved a sustained aggregate throughput of around
350MB/s (as measured by the incoming data rate on the 8 network
interfaces).
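
(For anyone repeating this, the incoming rate is easy to watch with
sar from the sysstat package, or straight from the kernel counters:)

    # per-interface rx/tx rates, sampled once a second:
    sar -n DEV 1

    # or poll the raw byte counters:
    cat /proc/net/dev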

At this point I think the tape drives - or rather the two 2Gbit HBA
ports that the drives are connected to - are the bottleneck.  I'm
going to have my SAN folks re-zone things so that each of the tape
drives is on a separate HBA port (getting rid of the Hitachi storage).
Then I'll rerun the test and see if that buys any more throughput...

-Devon


On 5/10/06, Dean <dean.deano at gmail.com> wrote:
>
> Hi there.
>
> Does anyone have any real world experience with using Redhat as a
> Media Server, with a quad port Gigabit NIC, using some kind of
> trunking on the NIC? I'm trying to design a Media Server that will
> be able to receive data quickly enough over IP to push a couple of
> fast tape drives, like LTO3. I'd like to be able to reliably get
> 200 MB/s or more from the network.
>
> I guess my questions are, is this realistic? And, how is the
> trunking done? In the switch, or is Linux able to do it, like
> Solaris?
>
> Thanks in advance for any insight.
>


