nv-l

Re: Hot standby configuration for Netview for Unix

Subject: Re: Hot standby configuration for Netview for Unix
From: "Ken Garst." <KGarst AT GIANTOFMARYLAND DOT COM>
To: nv-l AT lists.tivoli DOT com
Date: Thu, 19 Aug 1999 09:12:47 -0400
Here are some answers that were previously posted:

(Incidentally, the issue of IP address takeover and hardware address swapping
is easily solved by setting Tivoli's /etc/wlocalhost file to contain a single
entry: the hostname of the machine on which NetView is running.  Tivoli's oserv
daemon reads this file at startup to validate that it is on the proper server.)
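As a minimal sketch of that fix ("nvserver" is a placeholder for the hostname
tied to the HA service address; on the real server the file is /etc/wlocalhost,
but a scratch path is used here so the example is harmless to run anywhere):

```shell
# Sketch: pin oserv to a stable hostname via wlocalhost.
# "nvserver" is a placeholder for the service-address hostname.
# On the real NetView server WLOCALHOST would be /etc/wlocalhost.
WLOCALHOST=${WLOCALHOST:-/tmp/wlocalhost.demo}
echo "nvserver" > "$WLOCALHOST"
cat "$WLOCALHOST"      # the single line oserv reads at startup
```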




Ken Garst
02/04/99 02:19 PM

To:   Discussion of IBM NetView and POLYCENTER Manager on NetView
      <NV-L AT UCSBVM.ucsb DOT edu>
cc:
Subject:  Re: Hypothetical Question

I am in the middle of testing a two-node cascading HACMP 4.3 cluster under AIX
4.3.2 with NetView 5.1 and Tivoli Framework 3.6 as highly available resources.
The other cascading resource group is the NetView database option using Oracle
as the DBMS.

Once NetView 5.1 and Framework 3.6 have been installed on both nodes (this is
decidedly nontrivial), failover works fine except for the following bug I just
discovered.

I have HACMP set up for hardware address takeover as well as IP address
takeover.  The hardware address takeover option uses a fake service adapter MAC
address.  Unfortunately, the Tivoli framework uses the boot adapter's MAC
address as the host node's identifier.  This means that when NetView fails over
to the alternate node, there is a flood of authentication-failure traps in the
control desk until the router refreshes its ARP cache.
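To make the symptom concrete, here is a purely hypothetical log sample (the
line format below is invented for illustration and is not the real trapd.log
format) and a count of the failure traps in it:

```shell
# Hypothetical log sample -- line format invented for illustration only,
# NOT the real trapd.log format.
cat <<'EOF' > /tmp/trapd.sample
09:12:01 authentication failure trap from 10.0.0.7
09:12:02 authentication failure trap from 10.0.0.7
09:12:03 link up trap for interface tr0
09:12:04 authentication failure trap from 10.0.0.7
EOF

# Count the authentication-failure traps in the sample.
grep -c "authentication failure" /tmp/trapd.sample
```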

Aside from this bug, which Tivoli support is addressing, most of the questions
and activities that you listed in your original question are automatically
performed by HACMP.

One advantage of using HACMP rather than running two copies of NetView, as Mr.
Shanks mentioned, is that fewer machine resources are dedicated to NetView
under HACMP: if you run two copies of NetView on two machines, those hosts do
nothing else, whereas under the HACMP cluster one host runs NetView while the
other runs something else and serves as the failover node.

Incidentally, when using HACMP, the Framework and NetView filesystems are placed
on twin-tailed, shared external disk drives (in my case, 7135 model 110 SCSI
RAID arrays).  The NetView shared filesystems I am using are:

     /usr/OV
     /usr/OV/databases
     /usr/ebt
     /Tivoli
     /optivity
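
As a sketch of a sanity check before starting NetView on the takeover node
(the df/awk idiom below is generic POSIX shell, not an HACMP facility):

```shell
# Report MOUNTED/MISSING for each filesystem argument; a path counts as
# mounted only if it is itself a mount point.
check_mounted() {
    for fs in "$@"; do
        if df -P "$fs" 2>/dev/null | awk 'NR==2 {print $NF}' | grep -qx "$fs"; then
            echo "$fs MOUNTED"
        else
            echo "$fs MISSING"
        fi
    done
}

# The shared filesystems listed above.
check_mounted /usr/OV /usr/OV/databases /usr/ebt /Tivoli /optivity
```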




IBM's Jim Shanks answered me back personally about the two NetView servers and
I replied as follows:

Of course you are absolutely correct that the two-host NetView setup
facilitates training and upkeep by operational personnel.  HACMP cluster
administration is extremely technical and can only be done by a qualified
sysadmin.

Incidentally, I have diagnosed the authentication-failure errors in NetView
after a failover from the primary to the secondary node.  The cause stems from
the Tivoli framework using the IP adapter label as the host ID and associating
it with a hardware MAC address.  In the HACMP setup I am using both IP address
takeover and hardware address takeover, the latter requiring a "fake" MAC
address defined for the service IP address of the NetView host.  This "fake"
MAC is associated with the service IP address by HACMP, but the Tivoli
framework doesn't recognize the change.

I have identified two solutions.  The first is to put the host on its service
IP address and then install the Framework.  The other is to bring the host up
with Tivoli on its boot IP address and then, within Tivoli, change the host to
the service IP address.
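
A heavily hedged sketch of the second option: the odadmin odlist subcommands
below are from memory of Framework 3.x and must be checked against your
release's documentation before use; the dispatcher number and service address
are placeholders, and the wrapper defaults to printing the commands rather
than executing them (odadmin exists only on a TMR server).

```shell
# DRYRUN=1 (the default) only prints each command; set DRYRUN=0 on a real
# TMR server where odadmin is available.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run odadmin odlist                        # list dispatchers; note the number
run odadmin odlist change_ip 1 10.0.0.10  # 1 = dispatcher, 10.0.0.10 = service
                                          # address -- both are placeholders
```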

