Networker

Re: [Networker] SAN attached drives shared between Networker servers

2009-03-25 16:24:58
Subject: Re: [Networker] SAN attached drives shared between Networker servers
From: Michael Filio <mfilio AT RACKSPACE DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 25 Mar 2009 15:12:01 -0500
You may wish to verify with Quantum that your LTO4 drive firmware is up to date.

We applied 85V1 today, which is supposed to alleviate a lot of those TapeAlert 3 and 31 errors.

soupdragon wrote:
Hi, apologies firstly for this lengthy query.

We have recently upgraded our Quantum Scalar i2000 to 8 FC IBM LTO4 drives and 
4Gb IO blades. We have a Solaris 8 server and an AIX 5.3 storage node (latest 
Atape drivers), each with 2 dedicated HBAs, both running Networker 7.4.4.

We have zoned the 2 HBAs on each server into separate fabrics, with port 1 of 
each IO blade in fabric 1 and port 2 in fabric 2. This effectively means each 
server OS (inquire) sees each tape drive twice, once via each HBA.

We then use the OS-generated tape device path to dedicate 6 drives to the 
Solaris server and 2 to the AIX storage node as follows:

              FABRIC1   FABRIC2
              =======   =======
              Blade3P1  Blade3P2
              --------  --------
Control       SOLARIS
Drive 1       SOLARIS
Drive 2                 SOLARIS
Drive 3       SOLARIS
Drive 4                 AIX

              Blade4P1  Blade4P2
              --------  --------
Drive 5                 SOLARIS
Drive 6       SOLARIS
Drive 7                 SOLARIS
Drive 8       AIX

The thinking behind this was to balance traffic as much as possible between the 
HBAs and IO blades. It also allows reconfiguration without a reboot in the event 
of the loss of an HBA or blade, simply by modifying the device path in jbconfig.

In practice this worked OK for a couple of months. Recently, however, we have 
seen random TapeAlert 3 (media) failures on brand new Maxell LTO4 tapes, and 
the drives have started reporting TapeAlert 31 (hardware failure). In the last 
month Quantum have replaced 5 of our 8 drives (all new in December).

I am now starting to question the wisdom of the above configuration. Is it 
valid to allow the servers to see all the SAN attached drives and then use the 
OS device paths in jbconfig to dictate which paths Networker can use? Is anyone 
else running with a similar configuration?

Could the Networker nsrmmd process on one node be sending signals to the 
drives being written by the other, at times interrupting the data flow?

+----------------------------------------------------------------------
|This was sent by julian.barnett AT standardchartered DOT com via Backup 
Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------

To sign off this list, send email to listserv AT listserv.temple DOT edu and type 
"signoff networker" in the body of the email. Please write to networker-request 
AT listserv.temple DOT edu if you have any problems with this list. You can access the 
archives at http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER

