Hello,
If you configure LUN masking from the i2000, you should be able to isolate
which server sees which devices. Since the configuration has worked for you
this long, I suspect the drives really are having issues. Is firmware
autoleveling turned on on the i2000? Are the drives cleaned regularly? This
looks like a hardware issue and is better addressed by Quantum.
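As a quick sanity check once masking is in place (the serials and file names below are invented for illustration, not taken from any real i2000 or inquire output), you could diff the drive serials a host actually sees against the set you intended it to see:

```shell
#!/bin/sh
# Serials this host SHOULD see after LUN masking (sorted, one per line).
cat > /tmp/expected.txt <<'EOF'
SER0001
SER0002
EOF

# Serials the host actually reports. In practice, extract these from
# `inquire` (NetWorker) or the OS device listing instead of this sample.
cat > /tmp/actual.txt <<'EOF'
SER0001
SER0002
SER0003
EOF

# comm -13 prints lines only in the second file: drives the host can
# see but should not. Empty output means masking matches the plan.
comm -13 /tmp/expected.txt /tmp/actual.txt
```

Here the stray SER0003 would flag a drive that leaked through the masking.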
HTH
From: soupdragon <networker-forum AT BACKUPCENTRAL DOT COM>
Sent by: EMC NetWorker discussion <NETWORKER AT LISTSERV.TEMPLE DOT EDU>
Date: 03/23/2009 06:38 PM
Reply-To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: SAN attached drives shared between Networker servers
Hi, apologies first for this lengthy query.
We have recently upgraded our Quantum Scalar i2000 to 8 FC IBM LTO4 drives
and 4Gb I/O blades. We have a Solaris 8 server and an AIX 5.3 storage node
(latest Atape drivers), each with 2 dedicated HBAs, both running NetWorker
7.4.4.
We have zoned the 2 HBAs on each server into separate fabrics, with Port 1
of each I/O blade in fabric 1 and Port 2 in fabric 2. This effectively
means each server OS (inquire) sees each tape drive twice, once via each
HBA.
We then use the OS-generated tape device path to dedicate 6 drives to the
Solaris server and 2 to the AIX storage node as follows:
Blade3P1 (FABRIC1) / Blade3P2 (FABRIC2)
---------------------------------------
Control  -> SOLARIS
Drive 1  -> SOLARIS
Drive 2  -> SOLARIS
Drive 3  -> SOLARIS
Drive 4  -> AIX

Blade4P1 (FABRIC1) / Blade4P2 (FABRIC2)
---------------------------------------
Drive 5  -> SOLARIS
Drive 6  -> SOLARIS
Drive 7  -> SOLARIS
Drive 8  -> AIX
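Since each drive is visible twice (once per HBA), one way to confirm the expected path count per drive is to group device paths by drive serial number. The sketch below uses invented device paths and serials standing in for real inquire output; the awk pipeline is the point, not the sample data:

```shell
#!/bin/sh
# Hypothetical inquire-style listing: device path, vendor/model, serial.
# In practice you would feed this from `inquire` on each server.
cat > /tmp/inquire_sample.txt <<'EOF'
/dev/rmt/0cbn IBM ULT3580-TD4 SER0001
/dev/rmt/1cbn IBM ULT3580-TD4 SER0002
/dev/rmt/2cbn IBM ULT3580-TD4 SER0001
/dev/rmt/3cbn IBM ULT3580-TD4 SER0002
EOF

# Count how many paths each drive serial is visible through.
# A count of 2 means the drive is seen once via each HBA, as intended;
# any other count suggests a zoning or masking surprise.
awk '{count[$NF]++} END {for (s in count) print s, count[s]}' \
    /tmp/inquire_sample.txt | sort
```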
The thinking behind this was to balance traffic as much as possible
between the HBAs and I/O blades. It also allows reconfiguration without a
reboot in the event of the loss of an HBA or blade, simply by modifying
the device path in jbconfig.
In practice this worked OK for a couple of months. Recently, however, we
have seen random tape error 3 (media failures) on brand-new Maxell LTO4
tapes, and the drives have started reporting hardware failure 31. In the
last month Quantum have replaced 5 of our 8 drives (all new in December).
I am now starting to question the wisdom of the above configuration. Is it
valid to allow the servers to see all the SAN attached drives and then use
the OS device paths in jbconfig to dictate which paths Networker can use?
Is anyone else running with a similar configuration?
Could the NetWorker nsrmmd process on one node be sending signals to the
drives being written by the other, at times interrupting the data flow?
+----------------------------------------------------------------------
|This was sent by julian.barnett AT standardchartered DOT com via Backup
Central.
|Forward SPAM to abuse AT backupcentral DOT com.
+----------------------------------------------------------------------
To sign off this list, send email to listserv AT listserv.temple DOT edu and
type
"signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER