TSM schedules and Domino TDP

Aaron S.

ADSM.ORG Member
Joined
Aug 30, 2007
Messages
36
Reaction score
0
Points
0
Hi folks,

I am having an issue configuring schedules in TSM for Domino TDP nodes on multiple partitions. I used the sample command scripts and edited them to better suit my environment. (Note the samples as-is did not work in my environment.)

To test the command scripts I logged in as the Domino ID corresponding to the partition (for example, DOM_dpar1) and executed the script manually, which succeeded. However, when a schedule calls the script, it remains in a pending state (per the 'q ev * *' command) and a session is not even opened on the dsmc -console screen.

Some things worth noting: I am running dsmcad and have 'managedservices webclient schedule' in the /ba/bin/dsm.sys file. I have also tried the alternative, that is, removing the schedule option from dsm.sys and running dsmc sched in the background.
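For anyone following along, a minimal dsm.sys stanza of the kind described above might look like this (the server name and addresses are placeholders; the real file in this environment lives at /ba/bin/dsm.sys):

```
SErvername  TSMSERV1
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsmserv1.example.com
   PASSWORDAccess    generate
   MANAGEDServices   webclient schedule
```

With 'managedservices webclient schedule', dsmcad launches the scheduler itself; dropping the schedule keyword and running 'dsmc sched' in the background is the alternative described above.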

One major thing I believe is a part of this: in my script I have a variable called DOM_ID, written as DOM_ID=$USER. It is obvious why this works manually when I am logged in as the dpar AIX user; however, when TSM calls a script I do not know whether it logs in as the corresponding AIX user or as a client scheduler. I've tested various methods with no luck. I have a feeling it is something as simple as fixing the login in the script, or perhaps I am not running an application I should be running in the background?

Please advise.

Thanks,

Aaron
 
On UNIX the scheduler runs as root, so you will want to su in the script to the particular user (for example, the Oracle admins su'd to the Oracle group ID); then the DOM_ID variable should work correctly.
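That "su when root" pattern can be sketched like this. The user name dpar1 and the domdsmc path are placeholders taken from the thread, and this sketch only prints the command it would run rather than executing it:

```shell
#!/bin/sh
# Sketch of the "su when root" pattern (user and paths are illustrative).
DOM_ID=dpar1    # hypothetical Domino partition user

# If the TSM scheduler invoked us as root, wrap the backup command in
# `su - user -c "..."` so it runs under the Domino ID's environment;
# otherwise (already logged in as the dpar user) run it directly.
if [ "$(whoami)" = "root" ]; then
    cmd="su - ${DOM_ID} -c \"/opt/ibm/lotus/bin/domdsmc_${DOM_ID} incremental '*'\""
else
    cmd="/opt/ibm/lotus/bin/domdsmc_${DOM_ID} incremental '*'"
fi

echo "$cmd"    # in the real script you would execute this instead
```

Either branch ends up invoking the same domdsmc_${DOM_ID} binary; the only difference is whose environment it runs in.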
 
I use a built-in if statement that switches root to the user. But the plot thickens: there are several Domino logins, one for each dpar. For example, LPAR1 is divided into DP1 and DP2, and DLPAR2 is divided into DP3 and DP4. The script is as follows:

DOM_ID=$USER
export DOM_ID_DIR=/usr/tivoli/tsm/client/domino/bin/domdsmc_${DOM_ID}

date >> ${DOM_ID_DIR}/domsched.log.inc

iam=`whoami`
if [ ${iam} = "root" ]
then
    su - ${DOM_ID} -c "/opt/ibm/lotus/bin/domdsmc_${DOM_ID} incremental '*' /subdir=yes -adsmoptfile=${DOM_ID_DIR}/dsm.opt -logfile=${DOM_ID_DIR}/dominc.log" >> ${DOM_ID_DIR}/domsched.log.inc &
else
    /opt/ibm/lotus/bin/domdsmc_${DOM_ID} incremental '*' /subdir=yes -adsmoptfile=${DOM_ID_DIR}/dsm.opt -logfile=${DOM_ID_DIR}/dominc.log >> ${DOM_ID_DIR}/domsched.log.inc &
fi


The idea was to centralize all of the dpars in a single script, though it's starting to sound like that isn't possible. I also tried writing an individual script for each DPAR, but that didn't seem to help either. For example, change the DOM_ID value to DP1, so that the schedule logs in as root, switches to DP1, and executes the script.

I will test this method later today and see how it turns out. If this works, I will have to define a schedule for each DPAR and point it to the corresponding command, e.g. /usr/tivoli/tsm/client/domino/bin/dominc.dp1.
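For reference, the server-side definitions for a one-schedule-per-DPAR setup might look like this from an administrative client. The domain, schedule name, and start time here are illustrative; the node name follows the DOM_dpar1 example from the first post:

```
def sched STANDARD DOMINC_DP1 action=command objects="/usr/tivoli/tsm/client/domino/bin/dominc.dp1" starttime=21:00
def assoc STANDARD DOMINC_DP1 DOM_dpar1
```

Each DPAR would get its own schedule and association, each pointing at its own command file.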

Thanks for the input Chad - sometimes a scary question has a scary answer :p.
 
Ok - I changed the DOM_ID variable from $USER to DP1. I then defined a new schedule that points to this new command file, which I named domarc.dp1. No luck; I performed a 'q ev * *' and it shows the schedule in a pending state.

Any ideas?

Thanks in advance.
 
Hello there;

Did you check the schedule definition (q sched)? Verify whether the action of the schedule is "macro" or "command".

The action should be command; if not, update it.
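The check and fix described above would look something like this from an administrative client (the domain and schedule names are illustrative, reusing the per-DPAR naming from earlier in the thread):

```
q sched STANDARD DOMINC_DP1 f=d
upd sched STANDARD DOMINC_DP1 action=command
```

The detailed (f=d) output shows the Action field; if it reads Macro, the update switches it to Command so the objects string is executed as an OS command.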


Alex
 

Hi folks,

I figured out the issue and am posting this to thank all who helped, as well as to help those who land in a similar position.

Initially the script was done differently; I changed the variable "DOM_ID=$USER" to a specific Dynamic Partition, though I'm sure this was mentioned earlier in the thread. With each script tailored to a specific Dynamic Partition administrator, I created a corresponding schedule for each. This corrected the scripts. Also, in order for the command to execute correctly without a system-call error (getattr ioctl... something along those lines), execute the command in the background (add an ampersand (&) at the end of the command); this bypasses the system-call error.

Scripts aside, I had a scheduler running in the background, but I had not pointed it at the specific dsm.opt files. The way to resolve this is to run multiple schedulers, one for each DPAR, by issuing: $ nohup dsmc sched -optfile=<path to specific opt file for that dpar> 2>&1 &. After you hit Enter, press Ctrl+D and do a ps to ensure it is running.
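The per-DPAR scheduler startup can be scripted. This sketch only prints the commands that would be issued; the option-file layout is an assumption based on the domdsmc_<dpar> directories mentioned earlier in the thread:

```shell
#!/bin/sh
# Print one `dsmc sched` startup command per Domino partition.
# The dsm.opt locations are assumed from the DOM_ID_DIR layout above.
for dpar in dp1 dp2 dp3 dp4; do
    optfile="/usr/tivoli/tsm/client/domino/bin/domdsmc_${dpar}/dsm.opt"
    echo "nohup dsmc sched -optfile=${optfile} >/dev/null 2>&1 &"
done
```

Running the printed commands gives each DPAR its own scheduler session bound to its own option file, which is what makes each node's schedule fire independently.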

After all of these fixes, the schedules are now operating as they should.

Again, thank you to all who helped!

-Aaron.
 