Subject: Re: TSM - HACMP - EDT Configuration
From: Daniel Sparrman <daniel.sparrman AT EXIST DOT SE>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 7 Mar 2003 15:38:01 +0100
Hi

Concerning Gresham, you don't need to have multiple CLIENTIDs for Gresham; 
you can use the same one for all cluster resources. The CLIENTID is only 
used when communicating with ACSLS.

The Storage Agent should be installed locally on each cluster node. The 
Storage Agent is not bound to your cluster resources or their TSM node 
names.

We have both a TSM server running HACMP and EDT, and client nodes running 
HACMP with EDT and the storage agent.

You could look at it something like this:

1. Non-cluster resources, which should not be integrated into any HACMP 
resource groups: Gresham EDT and the TSM Storage Agent.

2. Resources which have to be included in your resource groups for correct 
fail-over functionality: the TSM client (which really means the 
dsm.opt/dsm.sys files and other configuration files needed by each TSM 
node name).

Your definition should look something like this:

1. EDT and the TSM Storage Agent locally defined on each cluster node. For 
example, you have LAN-free agents on Hnode1 and Hnode2. You can either use 
EDT CLIENTID Hnode1 and Hnode2, or just run the same client ID on both 
machines, for example CLUSTER. EDT doesn't care which node is connecting 
to it; it only communicates with the Storage Agent and ACSLS. The Storage 
Agent isn't bound to any node name that connects to it (whether it's 
Hres1, Hres2, Hres3 or Hres3_ORA). Therefore, these applications should be 
locally installed, and not included in any resource groups. A sketch of 
such a local Storage Agent setup follows below.
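
To give an idea of what the locally installed part could look like, here 
is a minimal sketch of a Storage Agent setup (the names HNODE1_STA and 
TSMSRV1, plus the addresses and passwords, are assumptions, not values 
from your environment):

    # Run locally on hnode1 to configure the Storage Agent
    dsmsta setstorageserver myname=HNODE1_STA mypassword=stapass \
        myhladdress=hnode1 \
        servername=TSMSRV1 serverpassword=srvpass \
        hladdress=tsmserver.example.com lladdress=1500

    # On the TSM server, define the Storage Agent as a server
    define server HNODE1_STA serverpassword=stapass \
        hladdress=hnode1 lladdress=1500

Repeat the same with something like HNODE2_STA on hnode2. Since the agents 
are defined per machine, nothing in this step needs to move with a 
resource group.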

2. One TSM node for each cluster node, which backs up data not stored in 
cluster resource groups (/var, /usr, /etc and so on). For example, Hnode1 
and Hnode2. A sample options stanza for such a node is sketched below.
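
Purely as an illustration (the stanza name tsm_local, the server address 
and the domain list are assumptions), the stanza for such a local node in 
the node's own dsm.sys could look roughly like this:

    SErvername       tsm_local
      COMMMethod        TCPip
      TCPServeraddress  tsmserver.example.com
      TCPPort           1500
      NODename          HNODE1
      PASSWORDAccess    generate
      DOMain            /  /usr  /var

This node never moves, so its options files stay in the default client 
directory on the local disks.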

3. One TSM node for each cluster resource group, plus a TSM node for the 
TDP for Oracle. For example, Hres1 and Hres2 will be backup/archive nodes, 
and Hres3 will be a TDP for Oracle node. If Hres3 also contains files 
which you want to back up (scripts, output files), then you will have to 
define two nodes for that cluster resource group (Hres3 and Hres3_ORA). 
The configuration files should be located on disks belonging to the 
cluster resource group in question. The binaries should be installed 
locally on each cluster node. A sketch of what a resource group's options 
could look like follows below.
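
Again only as a hedged sketch (the path /hres1/tsm, the stanza name and 
the port are assumptions), the options for a resource group node could 
live on that group's shared disks, something like:

    * dsm.sys stanza kept on disks that fail over with Hres1
    SErvername       tsm_hres1
      COMMMethod          TCPip
      TCPServeraddress    tsmserver.example.com
      NODename            HRES1
      PASSWORDAccess      generate
      PASSWORDDIR         /hres1/tsm
      CLUSTERnode         yes
      ENABLELanfree       yes
      LANFREECommmethod   TCPip
      LANFREETCPPort      1500
      DOMain              /hres1

The HACMP start/stop scripts for the resource group would then point the 
client (and the TDP for Oracle node, for example Hres3_ORA) at these 
files, for example via the DSM_DIR/DSM_CONFIG environment variables, 
before starting schedulers or backups.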

I hope this sums it all up. If not, please don't hesitate to send me any 
questions.

Best Regards

Daniel Sparrman
-----------------------------------
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 TÄBY
Switchboard: 08 - 754 98 00
Mobile: 070 - 399 27 51




"Bateman, Fred" <Fred.Bateman AT USDOJ DOT GOV>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
2003-03-07 12:30
Please respond to "ADSM: Dist Stor Manager"
 
        To:     ADSM-L AT VM.MARIST DOT EDU
        cc: 
        Subject:        TSM - HACMP - EDT Configuration


I hope I have provided enough detail to describe my problem.
I did assume the reader has fairly detailed knowledge of
HACMP and EDT.

Hardware/Software Environment:
        2 IBM p690s
        AIX 5.1 with ML03
        HACMP 4.4.1
        TSM 5.1.something (TBD) (on another machine)
        TSM Storage Agent
        TDP
        EDT 6.4.1 (Gresham Software)
        SAN
                2 McData 6064 switches
                6 McData ES-1000 switches
                1 STK Silo (9310)
                6 STK 9980 Tape Drives
                1 HDS 9980 (using SANergy would solve
                                this problem, but I would like the
                                option of using tapes LAN-free)
        Various other software

Legend:
        hnodeX == HACMP Node X (same as machine for this discussion)
        hresX  == HACMP Resource X

HACMP Configuration (simplified but hopefully adequate to describe 
problem):
        1 Concurrent Resource (hres3) running Oracle 9.2/RAC.
        1 Cascading Resource (hres1) normally on hnode1.
          Can fail over to hnode2 using IPAT.
        1 Cascading Resource (hres2) normally on hnode2.
          Can fail over to hnode1 using IPAT.

        In summary: each machine can be running one, two or all three
        HACMP resources.

        Each cascading resource has a set of disks
        which moves with the resource and therefore must be backed up
        from either machine.

        The concurrent resource has raw devices
        accessible to both machines which do not "move". Using TDP, they
        can be backed up from either machine.

        Each hnodeX has data (e.g., rootvg and JFS type data belonging to
        the concurrent resource) which stays with the machine
        and must be backed up.

Problem:
        How do I define TSM/EDT? (We currently use EDT to allow us to
        do LAN-free, so we know the basics.)

        I need to be able to move hres1 and hres2 between hnodes
        without worrying about tapes being dismounted. We currently
        set CLIENTID to hostname. I do not believe that would work
        in this environment because
                1) hres1 has tape(s) mounted using LANFREE
                2) hres3 has tape(s) mounted using LANFREE
                3) hres1 fails over from hnode1 to hnode2 for some reason
                4) the tape(s) being used by hres1 would not be dismounted

        I then thought of using hresX as the CLIENTID. I thought of
        separate TSM Storage Agents for each hresX (possible?). I thought
        of separate stgpools for each hresX pointing to separate
        libraries so I could use hresX as the EDT CLIENTID (not very
        appealing! but I think it would work).

        Any help in this matter would be GREATLY APPRECIATED!
