Re: [ADSM-L] Schedule Execution Environment

Subject: Re: [ADSM-L] Schedule Execution Environment
From: "Clark, Robert A" <Robert.Clark AT PROVIDENCE DOT ORG>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 22 Jun 2009 16:09:32 -0700
The old approach was to have a scheduler process per service group.
(The scheduler was set to manual, and the clustering app starts the
scheduler when the service group starts.) Each service group would then
have a unique nodename, and you'd configure a unique port number on the
client as well.

NodeA <- For the non clustered resources.
Payroll_SQL <- For the first service group.

NodeB <- For the non clustered resources.
HR_SQL <- For the second service group.
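
For illustration, a hedged sketch of what the first service group's
option file (kept on its shared disk) might look like; the nodename,
port number and path here are made up, and password handling is left
out:

    * s:\tsm\dsm.opt for the Payroll_SQL service group
    NODENAME        Payroll_SQL
    TCPCLIENTPORT   1502
    CLUSTERNODE     YES
    PASSWORDACCESS  GENERATE

The matching scheduler service gets installed set to manual so the
clustering app can start it with the group, along the lines of:

    dsmcutil install scheduler /name:"TSM Sched Payroll_SQL" ^
        /node:Payroll_SQL /optfile:s:\tsm\dsm.opt /autostart:no ^
        /clusternode:yes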

More and more, I'm seeing people have one big command script that issues
backups for any databases that are running on the node at the time:

NodeA <- For the non clustered resources.
NodeA_SQL <- Runs script that issues Payroll_SQL or HR_SQL backups,
whichever is active on the node at the time.

NodeB <- For the non clustered resources.
NodeB_SQL <- Runs script that issues Payroll_SQL or HR_SQL backups,
whichever is active on the node at the time.
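
Since a service group's shared drive is only mounted on the node where
the group is online, the script can just key off which drives exist. A
minimal sketch, assuming per-instance shares like the s:\tsm and t:\tsm
in your setup below (drive letters and file names are illustrative):

    @echo off
    rem Back up whichever SQL service groups are online on this node.
    rem A group's shared drive only exists here while the group is active.
    if exist s:\tsm\sql_full.cmd call s:\tsm\sql_full.cmd
    if exist t:\tsm\sql_full.cmd call t:\tsm\sql_full.cmd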

When a new DB is added, the TSM admin registers a new node, and both
scripts on both boxes need to be updated to look for the new database.
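
The server-side piece is just a REGISTER NODE, something like this
(nodename, password and policy domain are made up here):

    dsmadmc -id=admin -password=xxx "register node Finance_SQL xxx domain=SQL_DOM"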

In our case, this second approach is used with Polyserve. (There is
still some ugliness with the TSM client not knowing how to deal with the
clustered filesystems.)

[RC]

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Steven Harris
Sent: Monday, June 22, 2009 1:11 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] Schedule Execution Environment

Hi all

I'm setting up backups of a number of MSSQL instances in a three-node
cluster.

Instance 1 has its opt file in s:\tsm and a sql_full.cmd file there
tailored for it; instance 2 has its opt file in t:\tsm and similarly
another sql_full.cmd file tailored for it. Instances 3 and 4 are in u:
and v:.
Any instance can execute on any machine in the cluster, and I have set
up one scheduler for each instance.

When I run a command schedule, it runs in c:\program
files\tivoli\tsm\baclient by default, although I can give it a fully
qualified path name. The execution environment gives no clue as to
which scheduler it was run under.

I'd like all instances to execute via the same script name, each running
from its own share, so as to minimise the number of distinct schedules
that exist (fully built out there will be
incremental/daily/weekly/monthly/yearly schedules to consider, and
multiple clusters, so simplification is vital).

How can I pass something into the runtime environment to indicate which
instance the schedule is being run for?  The best I can come up with is
a different user for each scheduler, maybe with the home drive mapped to
the same drive as the instance uses.

Thanks

Steve

Steven Harris
TSM Admin
Sydney Australia


