Subject: Re: [Networker] SAN attached drives shared between Networker servers
From: Fazil Saiyed <Fazil.Saiyed AT ANIXTER DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 24 Mar 2009 07:52:16 -0500
Hello,
If you configure LUN masking from the i2K, you should be able to isolate 
which server sees which devices. Since the configuration has worked for you 
this long, I suspect the drives are indeed having issues. Is firmware 
autoleveling enabled on the i2K? Are the drives cleaned regularly? This 
looks like a hardware issue, and one better addressed by Quantum.
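For example, once masking is in place you can sanity-check what each host 
actually sees with the usual OS and Networker tools (your device names 
will differ):

    # On the Solaris server: list FC-attached tape devices
    cfgadm -al | grep -i tape

    # On the AIX storage node: list configured tape devices
    lsdev -Cc tape

    # On either host: Networker's own view of attached SCSI devices
    inquire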
HTH



soupdragon <networker-forum AT BACKUPCENTRAL DOT COM> 
Sent by: EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>
03/23/2009 06:38 PM
Please respond to
NETWORKER AT LISTSERV.TEMPLE DOT EDU


To
NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject
SAN attached drives shared between Networker servers

Hi, apologies first of all for this lengthy query.

We have recently upgraded our Quantum Scalar i2000 to 8 FC IBM LTO4 drives 
and 4Gb IO Blades. We have a Solaris 8 server and an AIX 5.3 storage node 
(latest Atape drivers), each with 2 dedicated HBAs, both running Networker 
7.4.4.

We have zoned the 2 HBAs on each server into separate fabrics, with port 1 
of each IO blade in fabric 1 and port 2 in fabric 2. This effectively 
means each server OS (inquire) sees each tape drive twice, once via each 
HBA.
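To illustrate (these particular paths are made up), each physical drive 
ends up with two OS device files, one per HBA:

    # Solaris: same physical drive, one path per HBA/fabric
    /dev/rmt/0cbn     # Drive 1 seen via HBA1 (fabric 1)
    /dev/rmt/8cbn     # Drive 1 seen via HBA2 (fabric 2)

    # AIX (Atape): likewise
    /dev/rmt0         # Drive 1 via HBA1
    /dev/rmt8         # Drive 1 via HBA2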

We then use the OS-generated tape device paths to dedicate 6 drives to the 
Solaris server and 2 to the AIX storage node, as follows: 

              FABRIC1    FABRIC2
              =======    =======
              Blade3P1   Blade3P2
              --------   --------
Control       SOLARIS
Drive 1       SOLARIS
Drive 2                  SOLARIS
Drive 3       SOLARIS
Drive 4                  AIX

              Blade4P1   Blade4P2
              --------   --------
Drive 5                  SOLARIS
Drive 6       SOLARIS
Drive 7                  SOLARIS
Drive 8       AIX

The thinking behind this was to balance traffic as evenly as possible 
between the HBAs and IO Blades. It also allows reconfiguration without a 
reboot in the event of the loss of an HBA or blade, simply by modifying 
the device path in jbconfig.
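As a rough sketch of what that reconfiguration looks like (the server name 
here is hypothetical), the device paths can be inspected from the 
Networker side before repointing a drive:

    # List the tape device resources Networker currently knows about
    nsradmin -s backupserver
    nsradmin> show name; enabled
    nsradmin> print type: NSR device

    # Then rerun jbconfig (or edit the device resource) to point the
    # drive at the surviving HBA's device path, e.g. /dev/rmt/8cbn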

In practice this worked OK for a couple of months. Recently, however, we 
have seen random tape error 3 (media failures) on brand new Maxell LTO4 
tapes, and the drives have started reporting HW failure 31. In the last 
month Quantum have replaced 5 of our 8 drives (all new in December).

I am now starting to question the wisdom of the above configuration. Is it 
valid to allow the servers to see all the SAN attached drives and then use 
the OS device paths in jbconfig to dictate which paths Networker can use? 
Is anyone else running with a similar configuration?

Could the Networker nsrmmd process on one node be sending signals to the 
drives being written to by the other, at times interrupting the data 
flow?
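One low-risk way I can think of to check this (log locations assume the 
standard Networker layout; adjust for your install) is to watch both hosts 
while a failure occurs:

    # On each host: confirm which nsrmmd processes are running
    ps -ef | grep nsrmmd

    # Watch the Networker daemon log for resets/errors against the drives
    tail -f /nsr/logs/daemon.log   # daemon.raw on newer releases,
                                   # rendered with nsr_render_log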

+----------------------------------------------------------------------
|This was sent by julian.barnett AT standardchartered DOT com via Backup 
Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------

To sign off this list, send email to listserv AT listserv.temple DOT edu and 
type 
"signoff networker" in the body of the email. Please write to 
networker-request AT listserv.temple DOT edu if you have any problems with this 
list. You can access the archives at 
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER

