Subject: Re: [Networker] ANALYSIS: Networker server price/performance
From: Robert Maiello <robert.maiello AT PFIZER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 6 Feb 2006 10:43:21 -0500
Ed,

Sorry I did not reply sooner. I did try, but the send button seemed to
go missing on Friday.

I assume you're running Solaris 9?  Solaris 8 is hard bottlenecked
at exactly 1000 Mbps. As noted in my earlier post, SUN did thorough
testing for me, and I confirmed it myself with ttcp.

That said, yes, 3 ce interfaces will be a bit much for a V480; SUN
recommends 3 UltraSPARC III CPUs per gigabit interface.

That being said, you should still be getting at least 100 MB/sec. My
work was done on a V880 with its PCI buses.  On the V480, I believe
there are 2 66 MHz slots on one PCI bus, with the rest on a 33 MHz bus.

In a nutshell, HBA and NIC card placement becomes critical. Also, how
are you distributing your clients across the NICs?  Are you using the
built-in ce's on the V480?  We found the gigabit card wanted an entire
66 MHz bus to itself, with the HBAs on a separate bus.
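
(If you want to double-check your layout, prtdiag will show it; this
is just a sketch, and the slot/bus labels vary a bit by platform:

   # prtdiag -v

Look at the IO Cards section for each slot's bus and clock, then note
which slots hold the ce NICs and which hold the Emulex HBAs.)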

Let us know how the cards are laid out...


Robert Maiello
Pioneer Data Systems

On Fri, 3 Feb 2006 10:31:24 -0500, Coty, Edward <Edward.Coty AT AIG DOT COM> 
wrote:

>
>All,
>
>A little help please. I have a Sun Fire V480 server with 4 CPUs and
>8 GB of memory. I have 3 ce gigabit copper NICs, and I have 2 Emulex
>HBAs, each connected to two IBM LTO2 tape drives. I do not seem to be
>able to get more than 60 MB/sec inbound across all 3 cards. I should
>be able to get 3 times that, or at least something better than what I
>am seeing. I am not able to stream any better than 80-90 MB/sec total
>to my drives; with hardware compression, I am writing out about the
>same amount of data that I am getting in through the front end. The
>thread below was very interesting to me. We have the latest kernel
>and ce patches on our Solaris server.
>
>Any and all thoughts are welcome on how I can improve performance.
>Maybe the SF480 cannot handle 3 ce cards, two HBAs, and 4 IBM LTO2
>drives.
>
>Ed Coty
>Open Systems Storage Engineering, LCNA
>973-533-2098
>
>
>
>-----Original Message-----
>From: Legato NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
>On Behalf Of Robert Maiello
>Sent: Thursday, December 08, 2005 5:55 PM
>To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
>Subject: Re: [Networker] ANALYSIS: Networker server price/performance
>
>Ted,
>
>Yeah, I was beating my head against the wall.  I have 2 backup NICs on
>the box (same LAN).  If I sent all my backups to backupNIC1 by setting
>each client's "server network interface" field, I maxed out the
>gigabit at 100 MB/sec.  The server is feeding 6 LTO2 drives.
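>
>(For reference, that field can be set per client with nsradmin; this
>is a rough sketch from memory, so substitute your own server, client,
>and interface hostnames:
>
>  # nsradmin -s backupserver
>  nsradmin> . type: NSR client; name: client1
>  nsradmin> update server network interface: backupnic1
>  nsradmin> quit
>
>The value should be the hostname bound to the NIC you want that
>client's save streams to come in on.)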
>
>Then, if I set half the clients' "server network interface" to
>backupNIC2, I saw traffic coming in on both NICs, BUT each NIC was
>running at about half the speed; total throughput at the end of the
>day was still exactly 1 Gbps.
>
>Working with SUN, they had me run ttcp with 5 streams to backupNIC1
>while at the same time running another ttcp with 5 streams to
>backupNIC2.  Sure enough, each NIC's throughput was halved.  Bottom
>line: they traced this to the streams queue on Solaris 8; it is the
>bottleneck, and there is no workaround on Solaris 8.  They tested
>Solaris 9 and got 1400 Mbps with 2 NICs; Solaris 10 was 1500+ Mbps.
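>
>(For anyone who wants to reproduce the test, it was along these
>lines; ttcp flags quoted from memory, so check your build's usage.
>One receiver per port on the server, 5 transmit streams per NIC from
>a client:
>
>  # server side, one listener per stream (ports 5001-5005):
>  ttcp -r -s -p 5001 &
>  # client side, 5 concurrent streams at backupNIC1:
>  ttcp -t -s -n 16384 -p 5001 backupnic1 &
>
>Repeat for the other ports, run the same 5 streams at backupNIC2 at
>the same time, and watch the per-NIC totals.)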
>
>So, with the system upgraded to Solaris 9 and the /etc/system
>parameters below from them, I'm getting 145+ MB/sec with 2 NICs.  I've
>positioned the cards as best I can within the V880 bus design.  I did
>notice the load is lower with Solaris 9; I would recommend it for your
>setup.
>
>In /etc/system the streams queues and the ce ring buffers are
>increased:
>
>* Increase stream queues to get rid of nocanputs on ce NICs
>set sq_max_size=30
>* Added to help with rx_ov_flows on ce
>set ce:ce_ring_size=1024
>set ce:ce_comp_ring_size=4096
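>
>(After the reboot, you can watch whether the counters those comments
>refer to are still climbing; a sketch, and the exact kstat statistic
>names can vary with the ce driver rev:
>
>  kstat -p ce:0 | egrep 'nocanput|rx_ov_flow'
>
>If they keep incrementing under load, the queue and ring sizes may
>need another bump.)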
>
>
>Robert Maiello
>Pioneer Data Systems
>
>On Thu, 8 Dec 2005 13:57:35 -0600, Reed, Ted G II [IT]
><Ted.Reed AT SPRINT DOT COM> wrote:
>
>>Robert,
>>Between this and your other thread, I have a (new) concern that I
>>didn't have 15 minutes ago.....and maybe you have some insight?
>>
>>I am replacing my antiquated E4500 storage node (8x 400 MHz CPU / 4 GB
>>RAM / 4x TOE [TCP offload engine] GigE / 4x 1 Gb HBA / 1x 100 Mb admin
>>NIC) with an E490 (4x 1.35 GHz dual-core / 8 GB RAM / 4x TOE GigE + 1
>>onboard GigE / 2x 2-port 2 Gb HBA / 1x GigE admin), both running
>>Solaris 8 and outputting to 6x STK 9940B drives (30/60/90 MB/sec
>>write speed).  I currently max out at 100-120 MB/sec aggregate
>>throughput to the drives, but I also have 6 nsrmmd processes (1 per
>>drive) that max out at 100% CPU (x6) when I hit that mark.  So I have
>>always worked under the assumption that my throughput limitation has
>>been due to CPU constraints during max usage.
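>>
>>(A quick way to confirm it's the nsrmmds pegging the CPUs, for
>>anyone who wants to check their own box; a sketch using stock
>>Solaris tools:
>>
>>  prstat -p `pgrep -d, nsrmmd` 5
>>
>>If each nsrmmd sits near 100% of a CPU while the drives are
>>streaming, the box is CPU-bound as described.)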
>>
>>I also assumed that updated servers could reach higher speeds through
>>enhanced CPUs with corresponding nsrmmd processing power.  However,
>>reading your comments on the streams queue raises the question: will
>>we see increased throughput from hardware upgrades alone, or should
>>we also be investigating an OS upgrade?  Will Solaris 8 bite me as a
>>solution?  Unlike you, I've never been able to get any kind of good
>>info from our SUN staff or support whenever I have raised the
>>question of true TCP I/O performance tuning...there are reasons
>>(other than the obvious) why our E4500 has all TOE cards.
>>
>>Thanks in advance for any thoughts/feelings/comments that you or any
>>other listserv member has.
>>--Ted
>>
>>
>>-----Original Message-----
>>From: Legato NetWorker discussion
>>[mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
>>On Behalf Of Robert Maiello
>>Sent: Thursday, December 08, 2005 1:37 PM
>>To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
>>Subject: Re: [Networker] ANALYSIS: Networker server price/performance
>>
>>
>>Jan,
>>
>>Actually, it's a little-known fact that the controller on a V440 is
>>a hardware RAID controller.  Booting off of it is probably not
>>supported, nor do I know anyone who uses it this way; most SUN
>>systems use Veritas or Sun Volume Manager (aka DiskSuite) to mirror
>>the boot disk, etc.
>>
>>See my other reply in this thread.  I found the V440 quite capable,
>>but apparently more CPU is needed as one tries to drive the gigabit
>>NICs faster.
>>
>>I'd love to know if any of SUN's new dual-core CPUs help in this
>>regard.
>>
>>Robert Maiello
>>Pioneer Data Systems
>>
>>On Wed, 7 Dec 2005 03:17:42 -0500, Jan Fredrik Løvik
>><jan.lovik AT ROXAR DOT COM> wrote:
>>
>>>I am in the process of investing in a new NetWorker server and was
>>>wondering what kind of disk system you are running on the V440.
>>>Since you are able to max out 4 CPUs, are you running software RAID?
>>>I have been given an offer on a V240 and a V440, as well as an
>>>AMD-based X4200 server.

To sign off this list, send email to listserv AT listserv.temple DOT edu
and type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or via RSS at
http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER