Networker

Re: [Networker] sizing a Sun server for a 4xLTO3 tape library

2006-03-14 23:38:19
Subject: Re: [Networker] sizing a Sun server for a 4xLTO3 tape library
From: "Ballinger, John M" <john.ballinger AT PNL DOT GOV>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 14 Mar 2006 20:34:07 -0800
Anyone have this same data for a Windows server?

John 

-----Original Message-----
From: Legato NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
On Behalf Of Robert Maiello
Sent: Wednesday, February 15, 2006 6:43 AM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] sizing a Sun server for a 4xLTO3 tape library

That is well summed up, Vernon. The key concept is that two LTO-3 drives
(and even two LTO-2 drives) can "eat" a gigabit NIC all on their own.

That said, I'd like to add that when looking at PCI buses for the HBAs
and/or NICs, I'm always hard pressed to pick a particular Sun server up
to the task.  Perhaps others can recommend one?  The ideal server would
be one where every card is connected to a separate high-speed PCI bus.

Also, it has been observed that Solaris 9 or Solaris 10 is needed to get
full throughput out of multiple NICs.
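For what it's worth, on Solaris 10 the trunking mentioned above can be done natively with dladm link aggregation (Solaris 9 needs the separate Sun Trunking product instead). A rough sketch; the bge0/bge1 interface names, aggregation key, and address below are hypothetical examples, not a tested recipe:

```shell
# Aggregate two GigE interfaces into one logical link (Solaris 10).
# bge0/bge1 and key 1 are example values; substitute your own interfaces.
dladm create-aggr -d bge0 -d bge1 1      # create aggregation with key 1
ifconfig aggr1 plumb 192.168.1.10 up     # plumb and address the aggregated link
dladm show-aggr                          # verify both links joined the aggregation
```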

Robert Maiello
Pioneer Data Systems



On Tue, 14 Feb 2006 15:34:25 -0800, Vernon Harris <harriv00 AT YAHOO DOT COM>
wrote:

>Ty,
>Rule of thumb for sizing a sun server to drive 4 x
>LTO3 drives would be as follows:
>
>   For each LTO-3 drive you would need a minimum of approximately 
>1.25 GHz of processing power.  That includes the processing power 
>necessary to handle one gigabit Ethernet NIC.  But to adequately 
>drive the 4 LTO-3 drives, if your backup methodology is LAN-based 
>backups, you should consider adding a second NIC and trunking the 
>two NICs together to create a fat network pipe.  Otherwise max 
>throughput would be limited to approximately 80-90 MB/sec, which is the 
>practical throughput limit of gigabit Ethernet.  If you add a second 
>NIC, you will need 1.5 GHz of processor power per drive.
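The arithmetic behind that trunking advice can be sketched as follows; the 80 MB/s native LTO-3 rate is an approximation used for illustration, not a measured figure:

```python
# Back-of-envelope sizing for 4 x LTO-3 fed over the LAN.
# Figures are the thread's rule-of-thumb numbers, not vendor specs.
LTO3_NATIVE_MBS = 80        # approximate LTO-3 native write rate, MB/s
GIGE_PRACTICAL_MBS = 90     # practical GigE throughput, MB/s (from above)
DRIVES = 4

tape_demand = DRIVES * LTO3_NATIVE_MBS               # total drive appetite
nics_needed = -(-tape_demand // GIGE_PRACTICAL_MBS)  # ceiling division

print(tape_demand)   # 320 MB/s of tape bandwidth to keep the drives fed
print(nics_needed)   # 4 GigE links just to keep 4 drives streaming at native rate
```

So even two trunked NICs only cover about half of the drives' native appetite, which is why most LAN-based setups end up drive-count limited by the network, not the CPU.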
>
>Practically, most servers can never generate enough I/O to keep LTO-3 
>drives streaming without shoeshining them.  The installations I've seen 
>with LTO-3 drives attached to Solaris servers have not experienced 
>performance issues on the servers.
>
>One important problem that I've seen repeatedly on Sun servers attached 
>to the fabric is with Sun-branded QLogic HBAs using the Leadville 
>driver stack.  It manifests as link-offline errors in the 
>/var/adm/messages file, which cause the HBA to go offline and the 
>connected drives and libraries to disappear from the fabric.  This 
>condition can only be resolved by rebooting the server.  Stick with 
>native Emulex or QLogic cards.  Otherwise you are asking for major 
>problems.
>
>--- Ty Young <Phillip_Young AT I2 DOT COM> wrote:
>
>> All,
>>
>> I apologize in advance if this topic has been covered.  I looked 
>> through the archive using a variety of search terms without 
>> successful results.
>>
>> We have determined that a 4 x LTO-3 tape library will work well in 
>> our environment.  Our Sun SEs, however, claim that attempting to 
>> drive such a library with one host (i.e. where all four LTO-3 drives 
>> are fiber-connected through a switch into the server) is asking for 
>> trouble, and that we really must consider driving it with two, in 
>> order to split up the GigE network bandwidth requirements as well as 
>> the FC HBA bandwidth requirements.  Their argument seems to be based 
>> on the theoretical maximum sustained I/O that a Sun server backplane 
>> can handle, at 1.2 GB/sec.
>>
>> What I'm not understanding is how one calculates I/O across a server.
>> Given that a server takes network traffic (input) and routes it to 
>> the tape drives (output), is it accurate to basically double the 
>> aggregate write-rate of a bunch of tape drives (read and write) and 
>> then double that number again to factor in performance with drive 
>> compression?
>>
>> My head is so full of numbers and stats at the moment that I cannot 
>> think straight and I need some help.  Thanks!
>>
>> -ty
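Ty's proposed doubling can be worked through with rough numbers; the 80 MB/s LTO-3 native rate and 2:1 compression ratio are illustrative assumptions, not measurements:

```python
# Ty's doubling logic made explicit, against the Sun SEs' quoted ceiling.
LTO3_NATIVE_MBS = 80   # approximate LTO-3 native write rate, MB/s
DRIVES = 4
COMPRESSION = 2        # 2:1 drive compression doubles the effective rate
BACKPLANE_MBS = 1200   # the Sun SEs' quoted backplane ceiling, 1.2 GB/s

tape_out = DRIVES * LTO3_NATIVE_MBS * COMPRESSION  # 640 MB/s out to the drives
server_io = 2 * tape_out                           # data crosses the bus twice: NIC in + HBA out
print(server_io, server_io <= BACKPLANE_MBS)       # 1280 False
```

On these assumptions the single-host aggregate (1280 MB/s) does exceed the 1.2 GB/s figure, which is presumably the Sun SEs' point; with no compression the same math gives 640 MB/s and fits comfortably.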
>>
>> To sign off this list, send email to listserv AT listserv.temple DOT edu 
>> and type "signoff networker" in the body of the email.  Please write 
>> to networker-request AT listserv.temple DOT edu if you have any problems 
>> with this list.  You can access the archives at 
>> http://listserv.temple.edu/archives/networker.html or via RSS at 
>> http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
>>

