ADSM-L

Re: versioning / expiring / multiple backups under same nodename

2002-01-18 10:15:15
Subject: Re: versioning / expiring / multiple backups under same nodename
From: Daniel Sparrman <daniel.sparrman AT EXIST DOT SE>
Date: Fri, 18 Jan 2002 16:07:00 +0100
Hi

The description you gave tells me there is something wrong with your
configuration.

Normally, when you set up TSM to handle clustering, you have one TSM
nodename for each cluster node (M1, M2, M3). These nodenames are only for
backing up local files on that node. Then you have either one nodename for
each cluster resource, or one nodename for all cluster resources. You also
have to bind the nodename to the cluster resource, so that the TSM service
that handles the cluster nodename moves with the cluster resource.

This way, when the resource moves from one node to another, the TSM
nodename will follow.
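
In dsm.opt terms it usually ends up looking roughly like the sketch below
(just an illustration: the cluster nodename, drive letters and file
locations are made-up placeholders, and the exact options depend on your
client level and platform):

   Local option file on each physical node (e.g. on M1):

      NODENAME         M1
      PASSWORDACCESS   GENERATE
      DOMAIN           c:
      CLUSTERNODE      NO

   Option file for the cluster resource, kept on the shared disk and used
   by its own scheduler service; that service is defined as a resource in
   the same cluster group, so it fails over together with the disk:

      NODENAME         CLUSTERGRP1
      PASSWORDACCESS   GENERATE
      DOMAIN           s:
      CLUSTERNODE      YES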

There are some good books about this on Tivoli's website.

Best Regards

Daniel Sparrman
-----------------------------------
Daniel Sparrman
Exist i Stockholm AB
Bergkällavägen 31D
192 79 SOLLENTUNA
Switchboard: 08 - 754 98 00
Mobile: 070 - 399 27 51


                    "Warren, Matthew                                            
                 
                    James"                     To:     ADSM-L AT VM.MARIST DOT 
EDU                      
                    <matthewjames.warre        cc:                              
                 
                    n AT EDS DOT COM>                 Subject:     Re: 
versioning / expiring / multiple 
                    Sent by: "ADSM:            backups under same nodename      
                 
                    Dist Stor Manager"                                          
                 
                    <[email protected]                                         
                 
                    DU>                                                         
                 
                                                                                
                 
                                                                                
                 
                    2002-01-18 16:00                                            
                 
                    Please respond to                                           
                 
                    "ADSM: Dist Stor                                            
                 
                    Manager"                                                    
                 
                                                                                
                 
                                                                                
                 




Thanks,

but the mechanics of the failovers etc. are fine. Only one machine will be
failed over at any one time.

I'll try and clarify:

M1 and M2 share some common filespace / dirpath names. M3 is the failover
machine.

Normally: M1 backs up to TSM under nodename M1, M2 backs up to TSM under
nodename M2, and M3 backs up to TSM under nodename M3.


If M1 fails over to M3, M3 will now capture M1's files from the shared disk
under the nodename M3. M1 still backs up, but cannot see the shared disk
area, so TSM marks all the shared disk files under nodename M1 as inactive.

That goes on for a couple of days. Then M1 fails back from M3. M3 backs up,
all of M1's shared disk files go inactive under nodename M3, and they become
active files again on M1 under nodename M1.

..Then(!) M2 fails over to M3. The above process is repeated, but it is
complicated because M3 already shares filespace names with M1, so any
duplicate filenames will back up and increase the version count of that file
under nodename M3; but the version count will be too high, as it counts
versions from both M1 and M2. This will cause the files to expire from M3
earlier than they would have if they had only ever been backed up under the
original machine's nodename.
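
To make the version counting concrete: say the backup copy group those
files bind to looked like this (names and numbers made up, not our real
policy):

   DEFINE COPYGROUP CLUSTERDOM STANDARD STANDARD STANDARD TYPE=BACKUP -
          DESTINATION=BACKUPPOOL VEREXISTS=3 VERDELETED=1 -
          RETEXTRA=30 RETONLY=60

With VEREXISTS=3, TSM keeps the three most recent versions of a file under
a given nodename. Because M1's and M2's copies of the same \shared\path
land on one and the same object under nodename M3, they share those three
version slots, so each machine's history rolls off roughly twice as fast
as it would under its own nodename.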


..Does anyone follow this? :-/

basically (!)

M1, M2 share dirpaths and filenames. The actual data is unique to each
machine and is held on a slice of disk that only that machine has access
to.

M3 is a failover. When a machine is failed over to M3, that machine's slice
of disk is mounted on M3. The original machine still backs up, but can only
see its local O/S disk.

M3 runs backups of all the disk it can see each evening, under the nodename
M3.
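
(In dsm.opt terms that is just an ordinary scheduled "dsmc incremental"
with something like

      DOMAIN   ALL-LOCAL

so whatever shared slice happens to be mounted on M3 that night gets
included under nodename M3.)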


So, if M1 is failed over, its files are backed up under the nodename M3.

..So far, no problem. If you know what days you were failed over you can
just get the files from the M3 nodename using -pitd / -pitt or -pick
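
e.g. something like this (all on one line; paths, dates and the date
format are just examples):

      dsmc restore "\\m3\s$\shared\*" c:\restoredir\ -subdir=yes
           -pitdate=01/15/2002 -pittime=23:59:00 -pick

run on M3 itself, or from one of the other machines by adding
-virtualnodename=M3 (which prompts for M3's password).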

But, M1 fails back, and then M2 fails over to M3.

When M3 backs up, it will see M2's disk and save it under the nodename M3.
PROBLEM! The shared filespace names between M2 and M1 will now cause TSM to
mark files inactive, or back them up creating versions / expirations that
should not be happening.


Arg!

Can anyone see what I'm getting at?