Subject: Re: [Networker] sizing a Sun server for a 4xLTO3 tape library
From: Robert Maiello <robert.maiello AT PFIZER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 15 Feb 2006 16:32:10 -0500
Yes, the x86 X4200 sounds intriguing.  I could not find the backplane
speed of the server anywhere.  It appears it may have 2 PCI-X buses, but
the specs simply say expansion bus (singular), 5 slots.  The hardware
design does not seem as well documented as that of the SPARC servers.

If the built-in gigabit ports are on a separate bus and one could place
the disk and tape HBAs across 2 PCI-X buses, this may make a good
server/storage node.  If it is a single PCI-X bus, it is unclear whether
it will be able to handle 5 HBAs.
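
A back-of-the-envelope check in Python, using nominal spec-sheet rates
(assumptions, not measured numbers):

# Can one 64-bit/133 MHz PCI-X bus feed 4 LTO-3 drives plus the
# network traffic that supplies them?
PCI_X_BUS_MB_S = 1064        # 64-bit @ 133 MHz, theoretical peak
LTO3_NATIVE_MB_S = 80        # per-drive native rate
GIGE_RAW_MB_S = 125          # 1 Gb/sec raw

tape_out = 4 * LTO3_NATIVE_MB_S     # 320 MB/sec out to the drives
net_in = 4 * GIGE_RAW_MB_S          # 500 MB/sec in from the LAN
print(tape_out + net_in)            # ~820 MB/sec crossing one bus
# On paper that fits under 1064 MB/sec, but 2:1 tape compression
# would push the drive side toward 640 MB/sec and blow the budget,
# hence the interest in a second bus.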

Also, one would assume NetWorker and the OS could make use of the dual cores.

Very interesting.

Robert Maiello
Pioneer Data Systems


On Wed, 15 Feb 2006 10:17:09 -0500, Matthew Huff <mhuff AT OX DOT COM> wrote:

>Since Legato 7.3 is certified for 64-bit Solaris x86 on Solaris 10, I
>would imagine a Sun Fire X4200 with 2 x Opteron 275 chips would be an
>ideal server. First, the 4 AMD cores would be a lot faster than the
>current SPARC chipset. Second, it has 4 gigabit Ethernet ports built in.
>Third, it has 5 PCI-X slots, which are a considerable upgrade over Sun's
>PCI slots. Finally, with 4 GB RAM and 2 x 73 GB drives, it lists for
>$6300...
>
>
>
>
>----
>Matthew Huff       | One Manhattanville Rd
>Dir of Operations  | Purchase, NY 10577
>OTA LLC            | Phone: 914-460-4039
>www.otaotr.com     | Fax: 914-460-4139
>
>
>
>
>
>> -----Original Message-----
>> From: Legato NetWorker discussion
>> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On Behalf Of Jeff Mery
>> Sent: Wednesday, February 15, 2006 10:10 AM
>> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
>> Subject: Re: [Networker] sizing a Sun server for a 4xLTO3 tape library
>>
>> Solaris 10 has a completely new TCP stack that eliminates the
>> single-threaded network processing of Solaris 9.  We're going to 10
>> as soon as our OS admins are comfortable with it (READ: Very Soon!!).
>>
>> Jeff Mery - MCSE, MCP
>> National Instruments
>>
>> -------------------------------------------------------------------------
>> "Allow me to extol the virtues of the Net Fairy, and of all
>> the fantastic dorks that make the nice packets go from here
>> to there. Amen."
>> TB - Penny Arcade
>> -------------------------------------------------------------------------
>>
>>
>>
>> Teresa Biehler <tpbsys AT RIT DOT EDU>
>> Sent by: Legato NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>
>> 02/15/2006 08:54 AM
>> Please respond to
>> Legato NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>;
>> Please respond to Teresa Biehler <tpbsys AT RIT DOT EDU>
>>
>>
>> To
>> NETWORKER AT LISTSERV.TEMPLE DOT EDU
>> cc
>>
>> Subject
>> Re: [Networker] sizing a Sun server for a 4xLTO3 tape library
>>
>> Is there any significant difference between Solaris 9 and 10
>> related to their ability to handle multiple NICs?
>>
>> -T
>>
>>
>> -----Original Message-----
>> From: Legato NetWorker discussion
>> [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
>> On Behalf Of Robert Maiello
>> Sent: Wednesday, February 15, 2006 9:43 AM
>> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
>> Subject: Re: [Networker] sizing a Sun server for a 4xLTO3 tape library
>>
>> That is well summed up, Vernon; the key concept being that 2 LTO3
>> drives (and even 2 LTO2 drives) can "eat" a gigabit NIC all on
>> their own.
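>>
>> A quick sanity check in Python, with nominal rates (assumed, not
>> measured):
>>
>> LTO3_NATIVE_MB_S = 80     # LTO-3 native rate, per drive
>> LTO2_NATIVE_MB_S = 40     # LTO-2 native rate, per drive
>> GIGE_PRACTICAL_MB_S = 90  # realistic gigabit Ethernet throughput
>> print(2 * LTO3_NATIVE_MB_S)      # 160 MB/sec: two LTO-3s, native
>> print(2 * 2 * LTO2_NATIVE_MB_S)  # 160 MB/sec: two LTO-2s at 2:1
>> print(GIGE_PRACTICAL_MB_S)       # vs. ~90 MB/sec from one NIC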
>>
>> That said, I'd like to add that, looking at PCI buses for the HBAs
>> and/or NICs, I'm always hard pressed to pick a particular Sun server
>> up to the task.  Perhaps others can recommend one?  The ideal server
>> would be one where every card is connected to a separate high-speed
>> PCI bus.
>>
>> Also, it has been seen that Solaris 9 or Solaris 10 is needed to get
>> the throughput out of multiple NICs.
>>
>> Robert Maiello
>> Pioneer Data Systems
>>
>>
>>
>> On Tue, 14 Feb 2006 15:34:25 -0800, Vernon Harris <harriv00 AT YAHOO DOT COM>
>> wrote:
>>
>> >Ty,
>> >Rule of thumb for sizing a Sun server to drive 4 x LTO-3 drives would
>> >be as follows:
>> >
>> >   For each LTO-3 drive you would need a minimum of approximately
>> >1.25 GHz of processing power.  That would include the processing
>> >power necessary to handle 1 gigabit Ethernet NIC.  But to adequately
>> >drive the 4 LTO-3 drives, if your backup methodology is LAN-based
>> >backups, you should consider adding a second NIC and trunking the
>> >2 NICs together to create a fat network pipe.  Otherwise max
>> >throughput would be limited to approximately 80-90 MB/sec, which is
>> >the practical throughput limit of gigabit Ethernet.  If you add a
>> >second NIC, you will need 1.5 GHz of processor power per drive.
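>> >
>> >A back-of-the-envelope version of that rule in Python (the rates are
>> >the nominal figures above, not measurements):
>> >
>> >DRIVES = 4
>> >GHZ_PER_DRIVE_1NIC = 1.25  # rule of thumb, single gigabit NIC
>> >GHZ_PER_DRIVE_2NIC = 1.5   # rule of thumb, dual trunked NICs
>> >GIGE_PRACTICAL_MB_S = 85   # midpoint of the 80-90 MB/sec figure
>> >print(DRIVES * GHZ_PER_DRIVE_1NIC)  # 5.0 GHz of CPU with 1 NIC
>> >print(DRIVES * GHZ_PER_DRIVE_2NIC)  # 6.0 GHz of CPU with 2 NICs
>> >print(2 * GIGE_PRACTICAL_MB_S)      # ~170 MB/sec from the trunked pipe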
>> >
>> >Practically, most servers can never generate enough I/O to keep
>> >LTO-3 drives streaming without shoeshining the drives.  The
>> >installations that I've seen with LTO-3 drives attached to Solaris
>> >servers have not experienced performance issues on the servers.
>> >
>> >One important problem that I've seen repeatedly on Sun servers
>> >attached to the fabric is with Sun-branded QLogic HBAs using the
>> >Leadville driver stack.  This manifests as link offline errors in the
>> >/var/adm/messages file, which cause the HBA to go offline and the
>> >connected drives and libraries to disappear from the fabric.  This
>> >condition can only be resolved by rebooting the server.  Stick with
>> >native Emulex or QLogic cards.  Otherwise you are asking for major
>> >problems.
>> >
>> >--- Ty Young <Phillip_Young AT I2 DOT COM> wrote:
>> >
>> >> All,
>> >>
>> >> I apologize in advance if this topic has been covered.  I looked
>> >> through the archive using a variety of search terms without
>> >> success.
>> >>
>> >> We have determined that a 4 x LTO3 tape library will work well in
>> >> our environment.  Our Sun SEs, however, claim that attempting to
>> >> drive such a library with one host (i.e. where all four LTO3 drives
>> >> are fiber-connected through a switch into the server) is asking for
>> >> trouble and that we really must consider driving it with two, in
>> >> order to split up the gigE network bandwidth requirements as well
>> >> as the FC HBA bandwidth requirements.
>> >> Their argument seems to be based on the theoretical maximum
>> >> sustained I/O that a Sun server backplane can handle, at 1.2 GB/sec.
>> >>
>> >> What I'm not understanding is how one calculates I/O across a
>> >> server.  Given that a server takes network traffic (input) and
>> >> routes it to the tape drives (output), is it accurate to basically
>> >> double the aggregate write-rate of a bunch of tape drives (read and
>> >> write) and then double that number again to factor in performance
>> >> with drive compression?
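>> >>
>> >> In Python terms, the calculation I have in mind (with an assumed
>> >> nominal 80 MB/sec native rate per LTO-3 drive) would be:
>> >>
>> >> tape_write = 4 * 80                    # 320 MB/sec aggregate to tape
>> >> through_server = 2 * tape_write        # in from the net, out to tape
>> >> with_compression = 2 * through_server  # 2:1 drive compression
>> >> print(with_compression)                # 1280 MB/sec vs. 1.2 GB/sec
>> >>
>> >> which lands right at that 1.2 GB/sec backplane figure -- if that is
>> >> even the right way to count it.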
>> >>
>> >> My head is so full of numbers and stats at the moment that I cannot
>> >> think straight, and I need some help.  Thanks!
>> >>
>> >> -ty

To sign off this list, send email to listserv AT listserv.temple DOT edu
and type "signoff networker" in the body of the email.  Please write to
networker-request AT listserv.temple DOT edu if you have any problems with
this list.  You can access the archives at
http://listserv.temple.edu/archives/networker.html or via RSS at
http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
