Networker


Subject: Re: [Networker] Compare NetWorker to other backup products
From: Matthew Huff <mhuff AT OX DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Mon, 31 Oct 2005 08:59:59 -0500
 

MH> This is a pet peeve of mine. 100MB auto-negotiation isn't a
MH> "protocol" like PPP. In PPP, both sides negotiate capabilities.
MH> 100MB auto-negotiation isn't like that at all. What happens is when
MH> the link becomes active, both sides "listen in" to the traffic and
MH> make an educated guess as to what the other side is speaking. One of
MH> the problems with this is that Cisco and other switches have
MH> spanning-tree protocols that block outgoing traffic until the
MH> spanning-tree BPDU fails to show up on another port.

> Spanning tree is a Good Thing[tm]. It prevents clueless users from
> connecting a hub or switch that isn't spanning-tree aware and thus
> creating a loop, which in turn can create network downtime. Of course,
> junk switches from D-Link have a problem handling this anyway, but
> that's another story.
> I haven't seen this problem ever, so I guess you're wrong, although I
> can't prove it. ;) Let's just say that it's VERY uncommon.

Of course spanning tree is a good thing, but even with spanning-tree
portfast enabled, the switch still completes the spanning-tree
calculation and makes sure there are no loops. If a loop does exist with
portfast on, it causes only a very brief problem, and with spanning-tree
BPDUguard turned on, that problem is eliminated as well.
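As a minimal sketch of the portfast/BPDUguard combination described
above (Cisco IOS syntax; the interface name is just an example, not from
the original thread):

```
! Hypothetical edge port connecting a backup server.
interface FastEthernet0/1
 ! Skip the listening/learning states so the link forwards immediately:
 spanning-tree portfast
 ! Err-disable the port if a BPDU ever arrives, so a switch or hub
 ! plugged into a portfast port can't introduce a loop:
 spanning-tree bpduguard enable
```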


> No, spanning tree, trunk auto-negotiation mode, PoE negotiation etc.
> do not block the negotiation in any way. And no, auto-negotiation is
> not about listening for traffic; both parties present a list of which
> modes they can operate in.

There is NO negotiation. Read the Cisco docs and RFCs. Only with gigabit
is there an actual negotiation of capabilities. When "auto" is set at
100MB, it's an educated guess, not a protocol or negotiation.
EtherChannel and trunking can interfere with auto-negotiation; read the
Cisco "Best Practices" to understand why they recommend "set port host"
when doing "auto".

MH> Why take the risk? 


> In Solaris SPARC, on Sun V440 servers with the Cassini chip, I've
> noticed that TCP tuning isn't the biggest performance problem.
> Instead, the driver or the NIC seems to eat up all of the CPU when
> there is high network throughput, probably due to TCP checksum
> calculations, but that hasn't been confirmed. My suggestion is to run
> the storage nodes on Linux x64 servers from, for instance, HP. That
> way you eliminate both Slowlaris and Sun V40z servers, which both have
> their own set of quality problems. Regarding the performance of the
> Cassini NIC, that has been covered in detail in a previous thread.

Actually, it's more the CPU interrupt load from handling the number of
packets per second that's the problem. Switching to jumbo frames
resolved that nicely, as did increasing the TCP window size to allow for
larger streams. Let's not get into an OS religious war, but I've been
working with Linux and Solaris for 16 years, and I'd never run a
production backup server on Linux, but that's just me. I wonder if your
reluctance to hard-code the Ethernet port settings is based on the
incredible variance in how to do this in a Linux environment and how
poorly documented it is. In many cases you have to read the driver code
to figure out which hex value to give to the driver.
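As a back-of-the-envelope illustration of why jumbo frames cut the
interrupt load (a sketch with assumed numbers, not measurements from the
V440s above): at a fixed throughput the packet rate scales inversely
with frame size, so going from a 1500-byte to a 9000-byte MTU reduces
the packets (and interrupts) per second to roughly one sixth.

```python
# Rough packets-per-second estimate at a given sustained throughput.
# The 1 Gb/s figure is illustrative, not from the original thread.

def packets_per_second(throughput_bps: float, mtu_bytes: int) -> float:
    """Approximate packet rate, ignoring per-frame header overhead."""
    return throughput_bps / (mtu_bytes * 8)

GIGABIT = 1_000_000_000  # 1 Gb/s of backup traffic

standard = packets_per_second(GIGABIT, 1500)
jumbo = packets_per_second(GIGABIT, 9000)

print(f"1500-byte MTU: {standard:,.0f} pps")
print(f"9000-byte MTU: {jumbo:,.0f} pps ({standard / jumbo:.0f}x fewer)")
```

With an interrupt per packet (no coalescing), that difference is exactly
what moves the bottleneck off the CPU.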

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the
body of the email. Please write to networker-request AT listserv.temple DOT edu 
if you have any problems
with this list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER