Re: [Networker] Parallelism???

From: Robert Maiello <robert.maiello AT PFIZER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Sun, 26 Jun 2005 09:44:09 -0400
What does the interrupts=1 setting do? I've seen it mentioned on the web but
can't find it documented anywhere.

I've added more ring buffers but did not see much improvement. I have not
played with bcopy or dvma. It seems I have 2 ce interfaces using 8
CPUs, but I'm only driving each ce to half its bandwidth.
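One way to see how the ce load is actually spread across CPUs is to combine mpstat with the driver's kstat counters. A diagnostic sketch (Solaris 8/9; the instance numbers 0 and 1 and the exact counter names are assumptions and may differ on your system):

```shell
# Per-CPU utilization and interrupt counts; look for CPUs pegged in %sys/intr
mpstat 5 3

# Driver-level counters for each ce instance; norcvbuf/noxmtbuf indicate
# packets dropped for lack of ring buffer space
kstat -m ce -i 0 | egrep 'rbytes|obytes|norcvbuf|noxmtbuf'
kstat -m ce -i 1 | egrep 'rbytes|obytes|norcvbuf|noxmtbuf'
```

If norcvbuf keeps climbing while no single CPU is saturated, larger rings may help; if one CPU is pinned, interrupt or taskq distribution is the more likely bottleneck.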

Your tcp_recv_hiwat is large.. I had not thought of raising it this high.
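Before changing these, it's worth recording the current values. ndd supports -get as well as -set, so the defaults can be captured for comparison (a sketch; parameter names as in Oscar's script below):

```shell
# Query the current TCP buffer tunables before overriding them
/usr/sbin/ndd -get /dev/tcp tcp_max_buf
/usr/sbin/ndd -get /dev/tcp tcp_recv_hiwat
/usr/sbin/ndd -get /dev/tcp tcp_xmit_hiwat
/usr/sbin/ndd -get /dev/tcp tcp_maxpsz_multiplier
```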

My understanding is that each ce uses 4 taskqs/worker threads. I see packets
spread across CPUs, so I see no reason to disable this, or how not spreading
them out would help.

Thanks for posting your setting.

Robert Maiello
Pioneer Data Systems

On Sat, 25 Jun 2005 11:08:18 +0200, Oscar Olsson <spam1 AT QBRANCH DOT SE> 
wrote:

>On Fri, 24 Jun 2005, Robert Maiello wrote:
>
>RM> Yes, it seems the Cassini interface can do 900+ Mbit/s, but Sun
>RM> recommends 3 to 4 UltraSPARC IIIs per NIC. We can't use jumbo frames
>RM> either.
>
>OK, do you know of any recommendations from Sun for the bge
>(Broadcom) interfaces? It would be interesting to know how they compare in
>regard to CPU usage for the same type of network I/O.
>
>RM> I heard Solaris 9 and, of course, Solaris 10 make some strides in
>RM> network performance. Still, I'd love to hear if anyone is maxing out
>RM> 2 or more NICs, and what Sun hardware they use to do it. Do the
>RM> UltraSPARC IVs help in any way here? I.e., 2 ce worker threads per
>RM> CPU? Of course, PCI bus bandwidth becomes an issue as well...
>
>We're running Solaris 9 on a Sun V440 with 4 UltraSPARC IIIi CPUs (1 MB
>cache, 1281 MHz). PCI bandwidth shouldn't be a problem, since the NICs
>are on two different buses. There is just no way we can max out both
>NICs on this hardware, at least not without using jumbo frames, and I
>doubt that would add more than 10-20% extra performance. We'd need to
>double it. :)
>
>Anyway, yesterday I decided to see whether I could tune kernel and driver
>parameters to increase performance. I used a few documents I found on
>Google and compared them (they tend to conflict somewhat), and then I
>came up with a few settings that seem better than the previous defaults.
>I'm pretty sure they're not optimal, but I have seen a 10% increase in
>backup throughput on our NetWorker server. This is clearly visible on the
>MRTG graph, which is drawn from load data on the port-channel interface
>of the switch (i.e. ce0 and ce1 aggregated, since we're running Sun
>Trunking 1.3).
>
>I was thinking of applying "set ce:ce_taskq_disable = 1", but some
>documents suggested that this might be a bad idea. Does anyone know why,
>or whether it should be enabled or disabled, and under what circumstances?
>
>This is what I applied (no warranty that this will work for you. Who
>knows, it might even mess up your system instead? :) ):
>
>---
>
>bash-2.05# pwd
>/etc/rc2.d
>bash-2.05# more S94local
>#!/sbin/sh
># Local commands run before networker
>/usr/sbin/ndd -set /dev/tcp tcp_max_buf 4194304
>/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 196608
>/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 65536
>/usr/sbin/ndd -set /dev/tcp tcp_maxpsz_multiplier 10
>
>/usr/sbin/ndd -set /dev/ce instance 0
>/usr/sbin/ndd -set /dev/ce rx_intr_pkts 48
>/usr/sbin/ndd -set /dev/ce rx_intr_time 30
>/usr/sbin/ndd -set /dev/ce instance 1
>/usr/sbin/ndd -set /dev/ce rx_intr_pkts 48
>/usr/sbin/ndd -set /dev/ce rx_intr_time 30
>
>---
>
>In /etc/system:
>
>set maxusers=2048
>set maxphys=2097152
>set sq_max_size=1600
>
>set rlim_fd_max=8192
>set rlim_fd_cur=8192
>
>set pt_cnt=1024
>set tcp:tcp_conn_hash_size=32768
>set ce:ce_bcopy_thresh=512
>set ce:ce_dvma_thresh=512
>set ce:ce_taskq_disable=1
>set ce:ce_ring_size=512
>set ce:ce_comp_ring_size=2048
>set ce:ce_tx_ring_size=4096
>
>[NOTE: Although the number of open files doesn't have anything to do with
>performance, I tend to change it on my systems anyway]
>
>---
>
>bash-2.05# more /kernel/drv/ce.conf
>interrupts=1;
>
>---
>
>//Oscar
>
>--
>Note: To sign off this list, send a "signoff networker" command via email
>to listserv AT listserv.temple DOT edu or visit the list's Web site at
>http://listserv.temple.edu/archives/networker.html where you can
>also view and post messages to the list. Questions regarding this list
>should be sent to stan AT temple DOT edu
>=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
