Subject: Re: [Networker] What speed can we expect on a Gb network?
From: Michael Hurst <mhurst AT NOC.UTORONTO DOT CA>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Wed, 5 May 2004 14:25:01 -0400

Here is some more info:

We have two Antares dual-channel, SCSI-2, Ultra-3, Wide LVD host adapters
in the box.
They sit in PCI slots on a shared 64-bit, 33MHz bus.
Each channel goes to a separate drive; only one channel also carries the
robot arm.
Each channel should run at 40MB/sec to its drive, so in theory we should be
able to get 160MB/sec total to our four drives.
Throughput is then limited by the drives themselves, rated at 15MB/sec
native and 30MB/sec compressed, which still puts the aggregate ceiling at
60MB/sec to 120MB/sec, well above the 40MB/sec we are seeing (the
arithmetic is spelled out below).
We do see good performance on the drives when cloning and staging from
local disk: about 24-28MB/sec.
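
To spell out that bandwidth budget:

  SCSI ceiling:        4 channels x 40MB/sec  = 160MB/sec
  Drives (native):     4 drives   x 15MB/sec  =  60MB/sec
  Drives (compressed): 4 drives   x 30MB/sec  = 120MB/sec

So nothing in the local hardware math accounts for a 40MB/sec cap.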

We haven't tried iperf yet, as the VLAN configuration on the Sun GigaSwift
NIC doesn't play well with tcpdump or other software monitoring.
That is one reason we had to use port monitoring on the switch.
The NIC is in a PCI slot on its own 64-bit, 66MHz bus.
We have tried Legato's "blaster" utility from several other clients and
hit the same 40MB/sec limit.
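
Once we sort that out, a basic iperf memory-to-memory test would look
something like this ("backupsrv" is just a placeholder hostname):

  on the server:  iperf -s
  on the client:  iperf -c backupsrv -t 30 -f M

The -f M flag reports in MB/sec, which makes the numbers easy to compare
against the 40MB/sec wall we keep hitting.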

So our initial testing appears to point to the network, but as mentioned,
it could be a bus limit as well.
Before we dive into this too deep, I was hoping to confirm that we can get
over 40MB/sec on Gb.

Stan:
Are you able to push all twelve of your drives using a Gb interface?

Cheers,
Mike

----- Original Message -----
From: "Stan Horwitz" <stan AT TEMPLE DOT EDU>
To: <NETWORKER AT LISTMAIL.TEMPLE DOT EDU>
Sent: Wednesday, May 05, 2004 1:07 PM
Subject: Re: [Networker] What speed can we expect on a Gb network?


On Wed, 5 May 2004, Robert McCarthy wrote:

>One thing I noticed with Gb NICs is that the bus speed of the system
>matters a lot, as does whether the card sits in a 64-bit or a 32-bit PCI
>slot. The few Gb NICs I have had came with disclaimers about limited
>bandwidth in 32-bit slots. And make sure you check your drivers!

Another thing I discovered with Sun hardware is that it pays to spread the
backup load across as many PCI channels as the server has. For example,
my NetWorker server is a Sun Enterprise 450, and we have it set up so
that two of our NDMP devices sit on one PCI channel and the other two
sit on the system's second PCI channel. This maximizes throughput.
We have a total of twelve tape drives in our library, all connected
via SCSI, and we were careful to balance the SCSI connections across the
PCI channels as much as possible.

The only way to do this is to study the schematics for the machine.
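
On a sun4u box like the E450, the prtdiag output can also help map each
card to its bus and slot:

  /usr/platform/sun4u/sbin/prtdiag -v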

Also, a good way to test network connectivity speed is to take a one GB
file, ftp it from each client to the NetWorker server, and note the
transfer times. One source of the bottleneck might be the clients
themselves.
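
For example (mkfile is Solaris-specific; dd from /dev/zero works
elsewhere, and "networker-server" is a placeholder hostname):

  mkfile 1g /tmp/test_file
  ftp networker-server
  ftp> bin
  ftp> put /tmp/test_file

ftp prints the elapsed time and transfer rate when the put completes.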

The other thing to test is to use the Unix cp command to copy the same
file directly to each tape drive, as in

time cp test_file /dev/rmt/0cbn

and see what the output of time is. Of course, do this only on scratch
tapes.
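
To turn the time output into a rate: if real came back as, say, 40
seconds for a 1GB file (a made-up figure), that works out to
1024MB / 40sec = roughly 25MB/sec to that drive.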


--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=