Subject: Re: [Veritas-bu] Some info on my experiences with 10GbE
From: "Peters, Devon C" <Peters.Devon AT con-way DOT com>
To: "Curtis Preston" <cpreston AT glasshouse DOT com>, <VERITAS-BU AT mailman.eng.auburn DOT edu>
Date: Thu, 18 Oct 2007 11:38:03 -0700
I'd be glad to share...
 
The OS is sol10 11/06, and I'm running the recommended patch cluster that was available on 9/12 - kernel patch is 125100-10.
 
For tunables, I've tested quite a few different permutations of settings for TCP, but I didn't find a whole lot to be gained from this.  Performance seemed best as long as I was using a TCP congestion window of 512k or 1024k (the sol10 default max is 1024k).  In the end I basically bumped up the max buffer and window sizes to 10MB, enabled window scaling, and bumped up the connection queues (an ndd sketch follows the list):
 
tcp_conn_req_max_q      8192
tcp_conn_req_max_q0     8192
tcp_max_buf             10485760
tcp_cwnd_max            10485760
tcp_recv_hiwat          65536
tcp_xmit_hiwat          65536
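 
Here's that ndd sketch.  These take effect immediately but don't survive a reboot, so they'd normally also go in a boot-time script.  The window scaling knob isn't in the list above; on sol10 I'd assume tcp_wscale_always:
 
ndd -set /dev/tcp tcp_conn_req_max_q 8192
ndd -set /dev/tcp tcp_conn_req_max_q0 8192
ndd -set /dev/tcp tcp_max_buf 10485760
ndd -set /dev/tcp tcp_cwnd_max 10485760
ndd -set /dev/tcp tcp_recv_hiwat 65536
ndd -set /dev/tcp tcp_xmit_hiwat 65536
ndd -set /dev/tcp tcp_wscale_always 1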
 
The tunables that made a noticeable difference in performance are (a sketch of making these persistent follows the list):
 
ddi_msix_alloc_limit    8
tcp_squeue_wput         1
ip_soft_rings_cnt       64
ip_squeue_fanout        1
nxge0 accept_jumbo      1
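 
For persistence, the first four would typically go in /etc/system (reboot required):
 
set ddi_msix_alloc_limit=8
set ip:tcp_squeue_wput=1
set ip:ip_soft_rings_cnt=64
set ip:ip_squeue_fanout=1
 
and accept_jumbo can be toggled per-interface at runtime (ndd -set /dev/nxge0 accept_jumbo 1) or set persistently in the nxge driver's nxge.conf.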
 
Only one CPU/thread per core is interruptible (set using:  psradm -i 1-3 5-7 9-11 13-15)
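 
(For context: psradm -i marks the listed virtual processors no-intr, so on this 4-core/16-thread T2000 only one hardware thread per core - CPUs 0, 4, 8, and 12 - fields device interrupts.  psrinfo shows the current state, and psradm -n reverts it.)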
 
You can find Sun's recommended settings for these cards here:  http://www.solarisinternals.com/wiki/index.php/Networks
 
 
Also, the iperf commands that have provided the highest throughput are:
 
Server:  iperf -s -f m -w 512K -l 512K
Client:  iperf -c <server> -f m -w 512K -l 512K -t 600 -P <numstreams>
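 
(Flag summary: -w sets the TCP window size, -l the read/write buffer length, -f m reports results in Mbits/sec, -t is the test duration in seconds, and -P is the number of parallel client streams.)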
 
 
"Is rss enabled?"  Not sure what you're asking here...
 
 
-devon
 


From: Curtis Preston [mailto:cpreston AT glasshouse DOT com]
Sent: Thursday, October 18, 2007 1:07 AM
To: Peters, Devon C; VERITAS-BU AT mailman.eng.auburn DOT edu
Subject: RE: [Veritas-bu] Some info on my experiences with 10GbE

7500Mb/s!  That’s the most impressive number I’ve ever seen by FAR.  I may have to take back my “10 GbE is a Lie!” blog post, and I’d be happy to do so.

 

Can you share things besides the T2000?  For example,

 

What OS and patch levels are you running?

Any IP-specific patches?

What ndd settings are you using?

Is rss enabled?

 

“Input, I need Input!”

 

---

W. Curtis Preston

Backup Blog @ www.backupcentral.com

VP Data Protection, GlassHouse Technologies


From: veritas-bu-bounces AT mailman.eng.auburn DOT edu [mailto:veritas-bu-bounces AT mailman.eng.auburn DOT edu] On Behalf Of Peters, Devon C
Sent: Wednesday, October 17, 2007 12:12 PM
To: VERITAS-BU AT mailman.eng.auburn DOT edu
Subject: [Veritas-bu] Some info on my experiences with 10GbE

 

Since I've seen a little bit of talk about 10GbE on here in the past I figured I'd share some of my experiences...

I've recently been testing some of Sun's dual-port 10GbE NICs on some small T2000's (1GHz, 4-core).  I'm only using a single port on each card, and the servers are currently directly connected to each other (waiting for my network team to get switches and fibre in place).

So far, I've been able to drive throughput between these two systems to about 7500Mbit/sec using iperf.  When the throughput gets this high, all the cores/threads on the receiving T2000 become saturated and TCP retransmits start climbing, but both systems remain quite responsive.  Since these are only 4-core T2000's, I would guess that the 6- or 8-core T2000's (especially with 1.2GHz or 1.4GHz processors) should be capable of more throughput, possibly near line speed.

The downside to achieving throughput this high is that it requires lots of data streams.  When transmitting with a single data stream, the most throughput I've gotten is about 1500Mbit/sec; I only got up to 7500Mbit/sec when using 64 data streams.  Also, the biggest gains seem to be in the jump from 1 to 8 data streams; with 8 streams I was able to get throughput up to 6500Mbit/sec.

Our goal for 10GbE is to be able to restore data from tape at a speed of at least 2400Mbit/sec (300MB/sec).  We have large daily backups (3-4TB) that we would like to be able to restore (not back up) in a reasonable amount of time.  These restores are used to refresh our test and development environments with current data.  The actual backups are done with array-based snapshots (HDS ShadowImage), which then get mounted and backed up by a dedicated media server (6-core T2000).  We're currently getting about 650MB/sec of throughput on the backups (9 streams on 3 LTO3 tape drives - MPX=3, and it's very compressible data).

Going off my iperf results, restoring this data using 9 streams should get us well over 2400Mbit/sec.  But we haven't installed the cards on our media servers yet, so I have yet to see what the actual performance of NetBackup and LTO3 over 10GbE is.  I'm hopeful it'll be close to the iperf results, but if it doesn't meet the goal then we'll be looking at other options.

--
Devon Peters

_______________________________________________
Veritas-bu maillist  -  Veritas-bu AT mailman.eng.auburn DOT edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu