Subject: Re: [Networker] clustered storage nodes installation
From: "Renty, Bart" <bart.renty AT HP DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Fri, 27 Jun 2003 17:15:18 +0200
You'll indeed have to reinstall NetWorker on the cluster members,
because in the past they were just clients and now they become
storage nodes.

Suppose you have a 2-device jukebox defined on the server as
        rd=server:\\.\tape1
        rd=server:\\.\tape2
If these devices are now on the SAN and connected to BOTH cluster
members, you'll have 4 new devices:
        rd=physmember1:\\.\tape1
        rd=physmember1:\\.\tape2
        rd=physmember2:\\.\tape1
        rd=physmember2:\\.\tape2
(Note: I've seen cases where tape1 on node1 corresponds to tape2
(instead of tape1!) on node2, and vice versa, so they don't always
match; I suppose this depends on the cabling to the switches.)

By assigning one of 2 different hardware IDs to each of the 2*3 devices
(the same ID on every path to the same physical drive), you make DDS
aware of which devices correspond to each other.
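For example (just a sketch; the ID strings are placeholders, and note
the possible tape1/tape2 swap mentioned above):

        rd=server:\\.\tape1          hardware id = SAN-DRIVE-A
        rd=physmember1:\\.\tape1     hardware id = SAN-DRIVE-A
        rd=physmember2:\\.\tape2     hardware id = SAN-DRIVE-A   (swapped path)
        rd=server:\\.\tape2          hardware id = SAN-DRIVE-B
        rd=physmember1:\\.\tape2     hardware id = SAN-DRIVE-B
        rd=physmember2:\\.\tape1     hardware id = SAN-DRIVE-B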

You can keep the clients as they were before, but you'll have to change
the storage node affinity list so that they point to the new storage
nodes, e.g.:

Client Physmember1              Storage node = Physmember1
Client Physmember2              Storage node = Physmember2
Virtual cluster group SQL       Storage Node = Physmember1,Physmember2

You can of course add the other nodes (and nsrserverhost) at the end of
each list.
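For what it's worth, this is roughly how the virtual client's affinity
then looks as a client resource in nsradmin (a sketch; "sqlvirtual" is
just a placeholder for your virtual SQL server name, and the output
layout is approximate):

        nsradmin> print type: NSR client; name: sqlvirtual
                            type: NSR client;
                            name: sqlvirtual;
                   storage nodes: physmember1, physmember2, nsrserverhost;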

This works great as long as your SQL cluster group is running on
Physmember1.
As soon as the SQL group is moved to Physmember2, however, the backups
will still be sent to Physmember1 (assuming that node is not down), so
the backups go over the network instead of to the local SAN device on
Physmember2!

I've tried to enter the virtual name as the storage node, but that
didn't work, at least not on V6.0.1.
We've escalated this problem to Legato, but they told us it was not
supposed to work, and that the only workaround was to edit the storage
node affinity list whenever a failover occurs...?!
You can create a script that checks which member each cluster group is
running on (using cluster.exe) and changes the storage node order
accordingly (using nwadmin.exe); see the sketch below.
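Something along these lines, as a rough sketch only: it assumes
nsradmin.exe as the non-interactive tool, a NetWorker server called
"server", a cluster group literally named "SQL" and a virtual client
resource named "sqlvirtual" -- all of these, and the cluster.exe
output parsing, need to be adapted to your own environment:

        @echo off
        rem Rough sketch -- adjust names to your environment.

        rem 1) Which member owns the SQL group right now?
        rem    (column position assumed from "cluster group /status" output)
        for /f "tokens=2" %%N in ('cluster group "SQL" /status ^| findstr /b /c:"SQL"') do set OWNER=%%N

        rem 2) Decide the new affinity order: owning member first.
        set SNLIST=physmember1, physmember2, nsrserverhost
        if /i "%OWNER%"=="physmember2" set SNLIST=physmember2, physmember1, nsrserverhost

        rem 3) Write an nsradmin input file and run it against the server.
        echo . type: NSR client; name: sqlvirtual> sn_update.nsr
        echo update storage nodes: %SNLIST%>> sn_update.nsr
        nsradmin -s server -i sn_update.nsr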

It might be, however, that this problem has been solved in the latest
versions, V7.0 or V6.1.3, but as far as I remember I didn't notice such
a change in the release notes.

Bart
