[Networker] 10 GbE performance tuning on Solaris client/Storage Nodes

2009-11-03 14:59:12
From: Ray Pengelly <pengelly AT QUEENSU DOT CA>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 3 Nov 2009 14:54:18 -0500
Hey everyone,

I currently have a Sun M5000 with an ixgbe 10 GbE card directly connected to a Sun X4540 system with an nxge 10 GbE card.

I am able to read from disk at roughly 930 MB/s using the uasm tool:

# time uasm -s ./testfile10g >/dev/null

real    0m11.785s
user    0m0.382s
sys     0m11.396s

If I do this over NFS I am only able to get about 104 MB/sec:

# time uasm -s ./testfile10g >/mnt/usam-out

real    1m38.888s
user    0m0.598s
sys     0m44.980s

Using NetWorker I see roughly the same numbers with the X4540 acting as a Storage Node using adv_file devices on a zpool. I know the filesystems on both the client and the server are not the bottleneck.
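For what it's worth, the NFS figure can be sanity-checked from the `time` output above. A quick sketch, assuming testfile10g is 10 GiB (10240 MiB; the exact file size is an assumption on my part):

```python
# Back out throughput from the wall-clock ("real") times reported by time(1).
# Assumes the test file is 10 GiB = 10240 MiB -- an assumption, not confirmed.

def throughput_mib_s(size_mib: float, real_seconds: float) -> float:
    """MiB transferred per second of wall-clock time."""
    return size_mib / real_seconds

local = throughput_mib_s(10240, 11.785)   # local uasm read to /dev/null
nfs = throughput_mib_s(10240, 98.888)     # same read written over NFS

print(f"local: {local:.0f} MiB/s, NFS: {nfs:.0f} MiB/s, "
      f"slowdown: {local / nfs:.1f}x")
```

Under that assumption the NFS run works out to roughly 104 MiB/s, matching the quoted figure, so the slowdown is on the order of 8x relative to local disk.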

Both links show up as 10000 full duplex via dladm show-dev.

Has anyone been through 10 GbE performance tuning on Solaris 10? Any notes or recipes?
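For anyone else looking at the same problem: the usual starting point on Solaris 10 is the TCP window/buffer tunables via ndd. The values below are illustrative only, not a tested recipe, so check your current settings first:

```shell
# Inspect the current maximum socket buffer size.
ndd -get /dev/tcp tcp_max_buf

# Raise the maximum socket buffer, congestion window cap, and the default
# send/receive windows so a single TCP stream can keep a 10 GbE pipe full.
# The 4 MB / 1 MB values here are examples, not recommendations.
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_cwnd_max 4194304
ndd -set /dev/tcp tcp_xmit_hiwat 1048576
ndd -set /dev/tcp tcp_recv_hiwat 1048576
```

Jumbo frames (MTU 9000) on both the ixgbe and nxge ends are also worth testing on a direct-connect link, since NFS over a single stream is sensitive to per-packet overhead.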

Anyone gotten better throughput than this?

Ray





--
Ray Pengelly
Technical Specialist
Queen's University - IT Services
pengelly AT queensu DOT ca
(613) 533-2034

To sign off this list, send email to listserv AT listserv.temple DOT edu and type 
"signoff networker" in the body of the email. Please write to networker-request 
AT listserv.temple DOT edu if you have any problems with this list. You can access the 
archives at http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER