Hi, yes TSM Manager is very good, but I am still looking for a system that takes its data directly from the TSM clients, as I feel this is the best place to start backup monitoring.
Myles
If you are set on monitoring from the clients, the best options I can think of are as follows.
Option One... Command Routing.
If all the servers are Unix, use cssh or dsh (or an equivalent) to execute a script on each server and generate a report from the output. If they are Windows I do not know the exact equivalent, but I know there are tools to do this. You can perform these tasks with Cygwin, but I hardly think it would be appropriate to install Cygwin on each client.
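To make the command-routing idea concrete, here is a rough sketch: a tiny filter that pulls the last schedule result out of a dsmsched.log stream, demonstrated against a fake log excerpt, with the ssh/dsh fan-out shown as a comment. The host names, log path, and exact message text are assumptions; check them against your own clients before relying on this.

```shell
#!/bin/sh
# check_log: print the last schedule-result line from a dsmsched.log stream.
# The message formats below are typical of the TSM client but are an
# assumption here; verify against your own dsmsched.log.
check_log() {
    grep -E "Scheduled event .*(completed successfully|failed)" | tail -1
}

# Demo against a fabricated log excerpt:
check_log <<'EOF'
01/02/2024 21:00:01 Scheduled event 'NIGHTLY' completed successfully.
01/03/2024 21:00:05 ANS1512E Scheduled event 'NIGHTLY' failed.  Return code = 12.
EOF

# On a real central host you would route this over ssh/dsh, e.g.
# (clients.txt and the log path are hypothetical):
#   for h in $(cat clients.txt); do
#       printf '%s: ' "$h"
#       ssh "$h" cat /opt/tivoli/tsm/client/ba/bin/dsmsched.log | check_log
#   done
```

The point of keeping the filter in one function is that the same logic runs unchanged whether you route it over ssh or run it locally.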
Option Two... Run Commands locally on client and send the output.
You could schedule a job on each client to interrogate the dsmsched.log and ftp the output to a central server. A Unix central server is handy here because you can set a null shell, which allows for an "ftp only" account. That is usually enough to let the account's password be included in batch files at most companies, and it passes most audits, since there is no way to log in to the server with such an account. The upside is that all jobs can run concurrently across a massive environment; the downside is that even after the data reaches the central server you still need more processing on that end to format the files, so you end up with a lot of scripts out in the environment.
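A client-side sketch of option two might look like the following: trim the schedule-result lines out of dsmsched.log and push the result to the report host. The report server name, the "ftponly" account, and the paths are all assumptions, and the ftp step is commented out because it needs a live server; the real dsmsched.log usually lives under the client install directory rather than /tmp.

```shell
#!/bin/sh
# Client-side sketch: snapshot the local dsmsched.log and send it to a
# central report host over the null-shell "ftp only" account.
SCHEDLOG=/tmp/dsmsched.log              # assumption; often under /opt/tivoli/tsm/client/ba/bin
REPORT=/tmp/$(hostname).sched

# For demonstration only, fabricate a small log; on a real client this
# file already exists and this step is removed.
cat > "$SCHEDLOG" <<'EOF'
01/03/2024 21:00:01 --- SCHEDULEREC STATUS BEGIN
01/03/2024 21:00:05 Scheduled event 'NIGHTLY' completed successfully.
EOF

# Keep only the result and error lines so the transfer stays small.
grep -E "Scheduled event|ANS[0-9]+E" "$SCHEDLOG" > "$REPORT"
cat "$REPORT"

# Non-interactive ftp using the "ftp only" account (commented out:
# reportserver, ftponly and PASSWORD are hypothetical):
# ftp -n reportserver <<'FTPEOF'
# user ftponly PASSWORD
# put /tmp/myhost.sched
# bye
# FTPEOF
```

Run from cron on each client, this is the "lots of small scripts in the environment" trade-off described above.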
Option Three... Shared Filesystems.
By setting up Windows shares on each client (with the dsmsched.log in the shared directory) and utilizing Samba, you could present all the files to a central host. You could then batch-mount each share, interrogate the data, and unmount the share. About the only plus is that it is possible; the downside is that it is a stupid option: too many areas for it to break, limited to Windows filesystems, etc. Obviously I would steer clear of this one.
Option Four... Send schedlog directly to a report server.
Leave all the processing to be done on a central server. Simply set up an ftp (or, even better, an scp) batch command and schedule it through your scheduling tool, e.g. cron, the Windows scheduler, or a proprietary tool like Control-M. This allows for a single job on the server that gathers all files within a "create time" window.
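The central-side gathering job for option four can be sketched like this: one cron job that picks up every schedlog dropped into a directory inside the last 24 hours and concatenates them into a report. The drop directory, file naming, and the 24-hour window are assumptions; the demo fabricates two client files so the logic can be seen end to end.

```shell
#!/bin/sh
# Central-side sketch: gather all schedlogs received within the last day
# into one report. Paths and the window size are assumptions.
DROPDIR=/tmp/schedlogs          # where clients scp/ftp their logs
REPORT=/tmp/daily_backup_report.txt

mkdir -p "$DROPDIR"
# Demo only: pretend two clients have dropped their files already.
echo "clientA: Scheduled event 'NIGHTLY' completed successfully." > "$DROPDIR/clientA.sched"
echo "clientB: ANS1512E Scheduled event 'NIGHTLY' failed." > "$DROPDIR/clientB.sched"

# -mtime -1 selects files modified within the "create time window"
# of the last 24 hours; widen or narrow to suit your schedule.
: > "$REPORT"
find "$DROPDIR" -name '*.sched' -type f -mtime -1 | sort | while read -r f; do
    cat "$f" >> "$REPORT"
done
cat "$REPORT"
```

Because only this one job does any text handling, the clients stay dumb file-senders, which is the attraction of this option.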
I would prefer option one; it has a cleanness that I like. It allows concurrent scripting without being too messy, as the jobs are all activated from a central host, and if things break there is a consistent first port of call. If that is not possible for any reason, option four would be my next port of call. Again, the centralization is key: rather than writing multiple text-manipulation scripts, you are writing simple file-send scripts, which are considerably less likely to break.
There are options to use postschedule commands as well. I am not sure why the client-side information is the information you feel you need, but as you can see there are a number of options, and these only touch the surface. Some are good (and secure) options, others are terrible, but they are options.
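For the postschedule route, the TSM client options file can fire a command after each scheduled backup via the POSTSCHEDULECMD option; a fragment might look like the following (the script name is hypothetical, and whether it goes in dsm.sys or dsm.opt depends on the platform):

```
* Client options file fragment (dsm.sys on Unix, dsm.opt on Windows):
* run a send script after every scheduled event completes.
POSTSCHEDULECMD "/usr/local/bin/send_schedlog.sh"
```

This ties the file-send step to the backup itself, so there is no separate cron entry to drift out of sync with the backup schedule.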