ADSM-L

Subject: Re: versioning / expiring / multiple backups under same nodename
From: "Warren, Matthew James" <matthewjames.warren AT EDS DOT COM>
Date: Fri, 18 Jan 2002 15:00:24 -0000
Thanks,

but the mechanics of the failovers etc. are fine; only one machine will be
failed over at any one time.

I'll try to clarify:

M1 and M2 share some common filespace / dirpath names. M3 is the failover
machine.

Normally: M1 backs up to TSM under nodename M1, M2 backs up to TSM under
nodename M2, and M3 backs up to TSM under nodename M3.


If M1 fails over to M3, M3 will now capture M1's files from the shared disk
under the nodename M3. M1 still backs up, but cannot see the shared disk area,
so TSM marks all the shared-disk files under nodename M1 as inactive.

That goes on for a couple of days. Then M1 fails back. M3 backs up, all of
M1's shared-disk files go inactive under nodename M3, and become active
files again under nodename M1.

..Then(!) M2 fails over to M3. The above process is repeated, but it is
complicated because M2 shares filespace names with M1 (which are already
present under nodename M3 from the earlier failover), so any duplicate
filenames will back up and increase the version count of that file under
nodename M3; but the version count will be too high, as it counts versions
from both M1 and M2. This will cause the files to expire earlier under
nodename M3 than they would have if they had only ever been backed up under
the original machine's nodename.
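
(If it helps to see what's happening, you should be able to watch the versions
merging by querying one of the shared files under M3's nodename. The command
below is only a sketch; the path is made up, and -virtualnodename needs M3's
password:

   dsmc query backup -virtualnodename=M3 -inactive "/shared/appdata/config.dat"

The active and inactive versions listed will be a mix of copies that originally
came from M1 and from M2, but the server counts them all against the single
VEREXISTS / VERDELETED limits on M3's copy group.)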


..Does anyone follow this? :-/

basically (!)

M1, M2 share dirpaths and filenames. The actual data is unique to each
machine and is held on a slice of disk that only that machine has access to.

M3 is a failover. When a machine is failed over to M3, that machine's slice
of disk is mounted on M3. The original machine still backs up, but can only
see its local O/S disk.

M3 runs backups of all the disk it can see each evening, under the nodename
M3.
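
(For what it's worth, M3's client options boil down to something like the
stanza below. This is just an illustrative sketch, not the real config; the
server name is made up, and it assumes a Unix client where DOMAIN ALL-LOCAL
sweeps up whatever slice happens to be mounted at backup time:

   * dsm.sys stanza on M3 (illustrative only)
   SErvername      TSMSRV1
      NODename        M3
      PASSWORDAccess  generate
      DOMain          all-local

So whichever failed-over slice is mounted on M3 that evening just gets backed
up under nodename M3 along with M3's local disk.)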


So, if M1 is failed over, its files are backed up under the nodename M3.

..So far, no problem. If you know what days you were failed over, you can
just get the files from the M3 nodename using -pitd / -pitt or -pick,
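e.g. something along these lines, assuming you either know M3's password for
-virtualnodename or have SET ACCESS / -fromnode in place (the date and paths
are made up):

   dsmc restore -virtualnodename=M3 -pitdate=01/16/2002 -subdir=yes "/shared/appdata/*" /restore/appdata/

or swap in -pick to choose the versions interactively.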

But then M1 fails back, and M2 fails over to M3.

When M3 backs up, it will see M2's disk and save it under the nodename M3.
PROBLEM! The shared filespace names between M2 and M1 will now cause TSM to
mark files inactive, or to back them up again, creating versions and
expirations that should not be happening.


Arg!

Can anyone see what I'm getting at?