Re: [Networker] 10 GbE performance tuning on Solaris client/Storage Nodes

Subject: Re: [Networker] 10 GbE performance tuning on Solaris client/Storage Nodes
From: Terry Lemons <lemons_terry AT EMC DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 4 Nov 2009 11:01:33 -0500
As ever, when you think 'performance' with NetWorker, consider using
bigasm.  See the 'NetWorker Performance and Tuning Guide' for more
information.
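
bigasm synthesizes the save stream in memory, so neither the source
disk nor the target disk is involved, which makes it a handy way to
isolate the network path.  A rough sketch of the test (the directive
syntax below is from memory and may differ by release; confirm it
against the guide):

# echo 'bigasm -S10g: testfile' > /tmp/bigasm.dir
# time uasm -s -f /tmp/bigasm.dir testfile > /dev/null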

tl

-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On
Behalf Of Reed, Ted G [IT]
Sent: Tuesday, November 03, 2009 3:58 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] 10 GbE performance tuning on Solaris
client/Storage Nodes

I don't believe your 930 MB/s rate is realistic, as it is going to
/dev/null, not local disk.  Since null discards the data without
looking at it or hitting disk, it is not an apples-to-apples comparison
with the over-NFS-to-disk run.  Perhaps recover to /tmp (if that isn't
RAM-backed space on your box)... but you should really compare a disk
landing zone to a disk landing zone.
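
To make that concrete (the local pool path below is made up; point it
at whatever is real local disk on the M5000):

Local disk to local disk:
# time uasm -s ./testfile10g > /localpool/usam-out

Local disk to the X4540 over NFS:
# time uasm -s ./testfile10g > /mnt/usam-out

At that point the only difference between the two runs is the
network/NFS layer.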

Having said that, much like 1 GbE, I have only seen 40-60% of max rated
speed on untuned interfaces.  Sun's performance tuning white paper (on
their public website... sorry, not at a computer to get the link) will
give you recommended OS-level optimizations for the TCP stack, buffers,
and more.
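
For example, the usual Solaris 10 knobs look like this (the values here
are illustrative only; use the ones the paper recommends, and remember
that ndd settings do not survive a reboot, so put them in a boot
script):

# ndd -set /dev/tcp tcp_max_buf 4194304
# ndd -set /dev/tcp tcp_cwnd_max 4194304
# ndd -set /dev/tcp tcp_xmit_hiwat 1048576
# ndd -set /dev/tcp tcp_recv_hiwat 1048576
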
-----Original Message-----

From:  "Ray Pengelly" <pengelly AT QUEENSU DOT CA>
Subj:  [Networker] 10 GbE performance tuning on Solaris client/Storage
Nodes
Date:  Tue Nov 3, 2009 14:00
Size:  1K
To:  "NETWORKER AT LISTSERV.TEMPLE DOT EDU" <NETWORKER AT LISTSERV.TEMPLE DOT EDU>

Hey everyone,

I currently have a Sun M5000 with an ixgbe 10 GbE card directly
connected to a Sun X4540 system with an nxge 10 GbE card.

I am able to read from disk at roughly 930 MB/s using the uasm tool:

# time uasm -s ./testfile10g >/dev/null

real    0m11.785s
user    0m0.382s
sys     0m11.396s

If I do this over NFS I am only able to get about 104 MB/s:

# time uasm -s ./testfile10g >/mnt/usam-out

real    1m38.888s
user    0m0.598s
sys     0m44.980s

Using NetWorker I see roughly the same numbers with the X4540 acting as
a Storage Node using adv_file devices on a zpool. I know both the
client and server filesystems are not the bottleneck.

Both links show up as 10000 full duplex via dladm show-dev.

Has anyone been through performance tuning 10 GbE on Solaris 10? Any
notes/recipes?

Anyone gotten better throughput than this?

Ray

--
Ray Pengelly
Technical Specialist
Queen's University - IT Services
pengelly AT queensu DOT ca
(613) 533-2034

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER