Networker

Re: [Networker] SV: [Networker] sizing a Sun server for a 4xLTO3 tape library

2006-05-12 13:14:38
Subject: Re: [Networker] SV: [Networker] sizing a Sun server for a 4xLTO3 tape library
From: "Maiello, Robert" <Robert.Maiello AT PFIZER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 12 May 2006 13:10:04 -0400
Jim,

Very nice design ... a 10 GigE NIC for your data input!

A new 4 Gbit HBA and 4 Gbit switch would really help now, I suppose. :-)

It is interesting: staying with 2 Gbit HBAs, you would definitely want
multiple channels to the LTO-3s, say two channels with two drives per
channel. Then, if your storage node has a 10 GigE NIC and all four drives,
would you run up against the 4 Gbit aggregate fibre limit, the tape drive
rate limit, or the PCI bus limit first? And with CPU left?

It would seem from what you're saying that the T2000 can drive 2 gigabit
NICs at full speed and have CPU left. Not bad for the money.
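For the record, the rough arithmetic behind that question can be sketched as follows. The rates below are nominal textbook figures (2 Gbit FC payload, LTO-3 native rate), not measurements from this thread:

```python
# Back-of-the-envelope ceilings for the question above (nominal rates;
# real-world throughput will be somewhat lower).
FC_2GBIT_MBS = 200        # approx. usable payload of one 2 Gbit FC link
LTO3_NATIVE_MBS = 80      # LTO-3 native (uncompressed) drive rate
TENGIGE_MBS = 1250        # raw 10 GigE line rate

channels = 2              # two 2 Gbit FC channels, two drives each
drives = 4

fc_ceiling = channels * FC_2GBIT_MBS       # 400 MB/s aggregate FC
tape_ceiling = drives * LTO3_NATIVE_MBS    # 320 MB/s at native rate
# With a 10 GigE feed the network is no longer the bottleneck; at native
# rates the four drives cap out before the two FC channels do:
bottleneck = min(fc_ceiling, tape_ceiling, TENGIGE_MBS)   # 320 MB/s
```

With compressible data the drives can run well past their native rate, at which point the two FC channels (and eventually the PCI bus) become the limit again.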

 

 

Robert Maiello
Pioneer Data Systems

________________________________

From: Jim Ruskowsky [mailto:jimr AT Jefferies DOT com] 
Sent: Friday, May 12, 2006 12:36 PM
To: Legato NetWorker discussion; Maiello, Robert
Cc: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] SV: [Networker] sizing a Sun server for a 4xLTO3 tape 
library

 


Just as an aside from my config (I'm using a single 10 Gig Ethernet): so far
the most I've been able to push through this little T2000 is 200 MB/s
sustained (going to 3 LTO-3 drives). My limiting factor there was a single
2 Gb fibre channel, filled to capacity between the T2000 and the LTO-3
drives. I am now trying different adjustments to see if I can maximize this
a bit more, balancing server parallelism, number of streams per tape drive,
and client parallelism. I am thinking of backing off the number of drives I
expect the NetWorker server to feed down to 2, and letting the storage node
feed 4 drives, as the server has a lot more overhead than the storage node.
I'd welcome any thoughts on this.

In our Dallas setup, we have a single Gbit coming into the T2000 and I have
seen sustained rates hovering between 80-100 MB/s.

Legato NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU> wrote on 
05/12/2006 12:07:29 PM:

> Ty,
> 
> I found the thread interesting rather than pesky. I apologize if I've
> hijacked the thread in any way.
> 
> So each T2000 right now is using a single Gbit link?  What I've done,
> without trunking, is utilize 2 gigabit NICs on the server for incoming
> data; i.e., I've manually configured half the clients to go to backupNICA
> and half to go to backupNICB, via the "server network interface"
> parameter.  When I add a storage node it will be via the storage node
> name.
> 
> One then sees data coming in on both NICs, and while it may not be as
> clean as trunking it certainly utilizes the NICs and adds speed/bandwidth.
> I'd be extremely interested in how high the T2000 and Solaris 10 can drive
> 2 gigabit NICs.
> 
> If you have both NICs IP'd on each T2000, perhaps you could try this: 
> manually divide the clients to send data to each of the NICs and let us 
> know if you approach the throughput of 2 NICs per box. A group which 
> utilizes all of your drives at better than 200 MB/sec seems possible, 
> without Sun Trunking.
> 
> Robert Maiello
> Pioneer Data Systems
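Robert's manual split can be sketched like this. The NIC and client names are made up for illustration; in NetWorker the chosen interface would go into each client's "server network interface" attribute, while this snippet only computes the mapping:

```python
# Hypothetical sketch of the manual load split described above: alternate
# the (sorted) clients across two backup NICs so each link carries roughly
# half the incoming streams. All names here are illustrative.

def split_clients(clients, nics=("backupNICA", "backupNICB")):
    """Assign clients round-robin across NICs."""
    return {c: nics[i % len(nics)] for i, c in enumerate(sorted(clients))}

mapping = split_clients(["db01", "web01", "app01", "file01"])
# app01 -> backupNICA, db01 -> backupNICB, file01 -> backupNICA, ...
# Each gigabit link moves roughly 110 MB/s in practice, so two NICs per
# box plausibly approach the 200 MB/s figure mentioned above.
```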
> 
> 
> On Thu, 11 May 2006 09:14:36 -0500, Ty Young <Phillip_Young AT I2 DOT COM> 
> wrote:
> 
> >Great news Jim.  Thanks for the update.
> >
> >Since I'm the one who started this pesky thread, I thought it only
> >appropriate to update the list on our results as well....
> >
> >We now have our new backup system up and in production.  It is comprised of
> >
> >2 x 8-core T2000s (Solaris 10)
> >2 x Sun/Emulex 4Gb dual-port FC HBAs
> >1 x STK L500 library with 4 x HP LTO3 drives (FC)
> >1 x Brocade 4100 switch with 4Gb SFPs
> >1 x (old) HDS 5800-series array (/nsr)
> >
> >I've configured one of the T2000s as the backup server and the other as a
> >storage node.  In my setup I simply zoned the robotics and drive 0 to the
> >first HBA port and drive 1 to the second HBA port.  I zoned drive 2 to the
> >first HBA on the storage node machine, and drive 3 to the second HBA.
> >Did I need 4Gb HBAs?  No, but the cost difference was minimal and the
> >switch can handle them, so why not.
> >
> >So far, the system is running beautifully.    I don't have a huge # of
> >clients to back up (i.e. < 200) but each of several  groups I have contain
> >about 45 members. By confining my group parallelism to 12 or 18 I typically
> >keep a couple of drives streaming along very nicely at generally about
> >45-80 MB/sec through a single gigabit ethernet connection.  Not
> >outstanding, but in line with my throughput expectations for a single
> >gigE link, and certainly a lot better than the 5-6 MB/sec I was seeing
> >with the previous drives (DLT8000s).  If I'm not mistaken, I reduced my
> >backup window by 66% and our operational requirements (i.e. tape monkeys)
> >by 90% by moving to the new media format.
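As a rough sanity check of those figures (the usable-link rate below is an assumption, not a number from the message):

```python
# Quick check of the parallelism and throughput figures above.
GIGE_PRACTICAL_MBS = 110      # assumed usable rate of one 1 Gbit link
group_parallelism = 12        # groups confined to 12 (or 18) sessions
drives = 2                    # a couple of drives kept streaming

streams_per_drive = group_parallelism // drives     # 6 streams per drive
per_drive_share = GIGE_PRACTICAL_MBS / drives       # 55 MB/s per drive
# 55 MB/s falls inside the reported 45-80 MB/s band, consistent with the
# single gigE link, rather than the LTO-3 drives, being the limit.
```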
> >
> >One of the big selling ideas for me with the T2000s was the ability to
> >trunk a couple of the onboard gigE NICs together to make a fatter pipe,
> >using Sun Trunking.   According to Sun this is not currently possible, but
> >will be after Solaris 10 Update 3 comes out in late summer 2006.  At that
> >time, the current driver for those NICs (ipge) will go away and be
> >replaced by the e1000g driver (I think I have that right), which is
> >supported in Sun Trunking.
> >
> >-ty
> >
> >Phillip T. ("Ty") Young, DMA
> >Manager, Data Center and Backup/Recovery Services
> >Information Services
> >i2 Technologies, Inc.
> >
> >
> >
> >From: Jim Ruskowsky <jimr AT Jefferies DOT com>
> >Sent by: Legato NetWorker discussion <NETWORKER@LISTSERV.TEMPLE.EDU>
> >To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> >Date: 05/11/2006 08:41 AM
> >Subject: Re: [Networker] SV: [Networker] sizing a Sun server for a 4xLTO3
> >tape library
> >Please respond to: Legato NetWorker discussion
> ><NETWORKER@LISTSERV.TEMPLE.EDU>; Jim Ruskowsky <jimr AT Jefferies DOT com>
> >
> >Tommy and all -
> >
> >This week has been the week of tweaking and tuning.  I am grateful for all
> >the input and ideas from this group.
> >
> >Tweak #1
> >
> >Initially, my default setup had all the data going through a single 2gb
> >fibre channel to the tape drives: all the clients were backing up through
> >a single fibre channel on the same Legato server, ignoring the storage
> >node.  So I updated my storage node affinity on half my servers.
> >After that fix, we were using a single fibre channel on the main server
> >and a single FC on the storage node.
> >
> >Tweak #2
> >
> >Since Legato seems to pick drives sequentially as it needs another
> >resource, I opted to set up my drives such that the odd-numbered drives
> >go through one fibre channel, and the even-numbered ones go through the
> >alternate channel.  I set up a set of symlinks (as suggested here in this
> >group) as /dev/jb/drive## pointing to /dev/rmt/##cbn.  This also lets me
> >name each drive## in the same sequential order as the drives sit in the
> >tape library and as Legato sees them.  Looks so much cleaner now.
> >After that fix, we are finally using both fibre channels on both
> >machines.  Theoretical total throughput, both machines combined, is
> >8Gb/sec; I'll be content if I realistically see half that sustained.
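The symlink scheme can be sketched as follows; a temporary directory stands in for /dev/jb here, and the /dev/rmt/##cbn targets are illustrative examples, not Jim's actual device paths:

```python
# Sketch of the /dev/jb/drive## naming scheme described above. A temp dir
# stands in for /dev/jb so this runs anywhere; on the real server the
# links would live in /dev/jb and point at the actual /dev/rmt devices.
import os
import tempfile

jb = tempfile.mkdtemp()                # stand-in for /dev/jb
for n in range(4):                     # drives 0-3, named in library order
    os.symlink(f"/dev/rmt/{n}cbn", os.path.join(jb, f"drive{n:02d}"))

# Odd-numbered drives sit on one FC channel and even on the other, but the
# drive## names stay in library order, which is what makes this "cleaner":
print(os.readlink(os.path.join(jb, "drive02")))   # /dev/rmt/2cbn
```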
> >
> >Overall, everything is working well.  No issues as far as hardware or
> >software compatibility.
> >I forgot to set up an iostat to capture my throughput overnight.  I'll get
> >that going for tonight - but the results I will be most interested in
> >myself are for the full backups on the weekend.
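One possible shape for that overnight capture; the log path and 60-second interval are arbitrary choices, and `iostat -xn` is the Solaris extended-statistics form:

```python
# Build a dated log path and the capture command for an overnight iostat
# run (the path and interval are illustrative, not from the message).
from datetime import date

log = f"/var/tmp/iostat-{date.today():%Y%m%d}.log"
# On the Solaris 10 server, the capture itself would then be started as:
cmd = f"nohup iostat -xn 60 > {log} 2>&1 &"
```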
> >
> >More updates to come next week.
> >
> >Jim
> >
> >"Tommy Carlsson" <tommy.carlsson AT calvia DOT se> wrote on 05/11/2006 
> >04:59:26
> >AM:
> >
> >> Hi Jim!
> >>
> >> Are you running some test on T2000 now?
> >>
> >> Regards
> >> Tommy
> >>
> >> From: Legato NetWorker discussion, on behalf of Jim Ruskowsky
> >> Sent: Wednesday 2006-04-19 15:14
> >> To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
> >> Subject: Re: [Networker] sizing a Sun server for a 4xLTO3 tape library
> >>
> >> Tommy, Peter, & list -
> >>
> >> We are very close to getting some real live data through our SunFire
> >> T2000 / LTO3 solution - probably in the next couple weeks.
> >>
> >> Our setup is
> >>         Networker 7.2.1 running on Solaris 10
> >>         ADIC i2000 with 10 LTO3 drives (connected to fibre switch via
> >> 3 i/o blades @ 2 fibres per blade)
> >>         Two SunFire T2000 - each connected to the fibre switch via 2
> >> fibres
> >>         We have DDS for all 10 drives, so with the fibre redundancy,
> >> each sun server sees each of the 10 drives twice, so networker sees
> >> all 10 drives via 4 different paths
> >>         The SunFires each connect to our backup ethernet switch via
> >> 10Gig Ethernet.
> >>
> >
> >
> >
> >
> >
> >Jefferies archives and reviews outgoing and incoming e-mail.  It may be
> >produced at the request of regulators or in connection with civil
> >litigation.
> >Jefferies accepts no liability for any errors or omissions arising as a
> >result of  transmission. Use by other than intended recipients is
> >prohibited.
> >
> >To sign off this list, send email to listserv AT listserv.temple DOT edu and 
> >type
> >"signoff networker" in the
> >body of the email. Please write to networker-request AT listserv.temple DOT 
> >edu if
> >you have any problems
> >with this list. You can access the archives at
> >http://listserv.temple.edu/archives/networker.html or
> >via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
> >
> 
> 



