Subject: Re: [Networker] Back-to-back 10GbE
From: David Koch <David.Koch AT NEWCASTLE.EDU DOT AU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 14 May 2008 10:05:06 +1000

Hi Kevin,

I'm just south of you in Newcastle at the University of Newcastle.

We have a similar situation where we have a number of datacentres:

(a) our main datacentre is lights-out as far as is possible, and we, the IT 
staff, are in another building 500 metres away.

(b) we have a mini-datacentre in our office building that contains the backup 
server, tape library and disk staging, and we have single-mode fibre links back 
to the main datacentre:

     - we have two 4 Gbps Fibre Channel ISL links between the Cisco MDS9506 SAN 
fabric switches in the main datacentre and the Brocade 4100B switch in this 
mini-datacentre, which carries the SAN fabric for the tape library and drives, 
the backup server and the disk-staging array.
     - we currently have multiple GbE IP network connections from the main 
datacentre network switches to the mini-datacentre

(c) we have another small datacentre on the other side of campus and have GbE 
IP network links to it

(d) we have another small datacentre at our Ourimbah (NSW Central Coast) campus 
85 km away, and we have two GbE IP links to it, plus a storage node and 
another tape library there.

We are currently refreshing the backup server from a Sun V890 to a Sun T5220 
with a 10GbE link back to the main datacentre, along with upgrading from a Sun 
StorEdge 3511 2 TB disk array to a Sun StorageTek 2540 6 TB disk array.  The 
tape library is a StorageTek/Sun SL500 with 477 slots, four LTO-2 drives and 
three LTO-3 drives (we will shortly add LTO-4 drives and remove the LTO-2 
drives over the next 12 months).

We have a Sun T2000 storage node with a Sun StorEdge 3511 2TB staging array in 
the main datacentre.  We are about to lift its IP connection from 4x GbE to 1x 
10GbE.

We have a private VLAN between most of the servers across our datacentres, and 
nearly all backup traffic flows over this private VLAN.  The new backup server 
and the storage node will have 10GbE into this private VLAN.  I think this is a 
neater scenario than having a point-to-point 10GbE link between your backup 
server and your storage node, as this allows any server to access any of the 
back-up/storage-node servers.
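
For what it's worth, steering backup traffic onto the private VLAN is just 
ordinary Solaris plumbing plus name resolution, since NetWorker reaches 
storage nodes by hostname.  A rough sketch follows; the nxge interface name, 
the addresses and the hostnames are illustrative assumptions, not our actual 
config:

    # Plumb the 10GbE interface onto the private backup VLAN (Solaris 10).
    # nxge0 is the instance the Sun 10GbE cards typically present; the
    # 192.168.200.0/24 addresses and hostnames below are made up.
    ifconfig nxge0 plumb
    ifconfig nxge0 192.168.200.10 netmask 255.255.255.0 up

    # Make the interface persistent across reboots.
    echo "backup-priv" > /etc/hostname.nxge0
    echo "192.168.200.10  backup-priv" >> /etc/hosts

    # Point the storage node's name at its private-VLAN address so that
    # save and clone traffic stays off the public network.
    echo "192.168.200.20  storagenode-priv" >> /etc/hosts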

We are using the Sun X1027A-Z 10GbE cards, or the XAUI 10GbE cards that fit in 
the T5220's XAUI slots, for our 10GbE server-end links.  I looked at the 
Neterion Xframe 10GbE NICs but, at AU$3,500+ each, saw them as too expensive 
for the extra performance they might provide versus the Sun 10GbE NICs at 
under AU$2,000 each.  Furthermore, I think their PCIe 10GbE NIC was yet to be 
released.

Having the 10GbE links allows you to move savesets on disk staging to the tape 
drives *through* the backup server/storage node that has the library's tape 
drives attached (either by SCSI or FC).  Using NW DDS licensing and having FC 
ISL links between your two datacentres allows the storage node to clone its 
savesets directly to the tape drives (if they are SAN-fabric attached) 
without requiring the data to pass through the server that is normally directly 
connected to the tape drives.  Which of these two approaches is workable 
depends on:
- your server load (to support the extra pass-through IP traffic if not using 
DDS),
- how much money you have available to buy and support the required SAN fabric 
and ISLs, and
- the costs involved in NW DDS licensing.

I suspect that a mixed approach is best:
- use FC ISL between your datacentres and zone a tape drive or two onto your 
storage node,
- let the backup server and the storage node access each other's tape drives via 
the 10GbE IP interconnect, and perhaps
- have a small number of DDS licenses to allow some tape drives that are zoned 
to both the main back-up server and the storage node to be shared.
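
If you go down that path, the knob that controls where clones get written is 
the "clone storage nodes" attribute on the client resource.  A minimal 
nsradmin sketch, with a placeholder hostname (check the attribute name against 
your NW release before relying on this):

    # nsradmin is NetWorker's command-line administration tool.
    # Select the storage node's client resource, then direct its clones
    # at itself so they are written via its own (DDS-shared) tape
    # drives rather than through the backup server.
    nsradmin
    . type: NSR client; name: storagenode.example.edu
    update clone storage nodes: storagenode.example.edu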

Happy to hear any other views on any of the above,

Cheers, David

>>> Kevin West <kevin.west AT UNE.EDU DOT AU> 14/05/2008 7:29 am >>>
Thanks for the reply Jeff.

The single-mode fiber is fine for what I want to try.

I am going with Neterion Xframe II network cards.

Anyone else want to share information/opinions on a back-to-back connection
for cloning/staging? Will it or won't it work?

Cheers

Kev


On Tue, 13 May 2008 08:57:36 -0500, Jeff Mery <jeff.mery AT NI DOT COM> wrote:

>I can't see how it wouldn't work as long as the communications are set up
>to use the 10GbE connection.  Systems in B would need to back up to that
>storage node over the "public" network, and then the storage node would need
>to be sure to clone over the private network between it and the NetWorker
>server.
>
>The bigger question would be can the fiber between the "rooms" support
>10GbE over the distance you need to go?  We're in the process of shuffling
>one of our data centers around and came to find out that we had to run
>additional 9um single-mode fiber between our buildings to guarantee 10GbE
>speeds.  All our single-mode was used up and the multi-mode wouldn't cut
>it.
>
>Jeff Mery - MCSE, MCP
>National Instruments
>
>-------------------------------------------------------------------------
>"Allow me to extol the virtues of the Net Fairy, and of all the fantastic
>dorks that make the nice packets go from here to there. Amen."
>TB - Penny Arcade
>-------------------------------------------------------------------------
>
>
>
>From: Kevin West <kevin.west AT UNE.EDU DOT AU>
>To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
>Date: 05/12/2008 04:28 PM
>Subject: [Networker] Back-to-back 10Gb
>
>
>
>Hi People
>
>Just a quick question to see if this is possible.
>
>We have two server rooms (let's call them room A and room B).
>
>All the backup infrastructure is in room A (Server, Tape library & D2D
>unit).
>
>The network connection between the two locations is not the best at the
>moment, which is slowing the backups down as the number of systems and the
>amount of data in room B increase.
>
>My plan is to put a storage node with a D2D unit in room B to back up the
>systems locally first, then clone/stage the data to the tape library in
>room A.
>
>There is an unused SC fibre between rooms (no one tell our network people
>of
>my plan).
>
>My question is: can I use the SC fibre as a back-to-back connection (direct
>connection between server & storage node) using 10Gb network cards, and can
>the clone/staging data be directed to use this link only?
>
>Thanks in advance.
>Kev
>

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type "signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
