Re: Daily Backup Report
2002-04-16 18:56:34
I thought I had put my two cents' worth in on this subject, but when
searching the archives I failed to see my comments, so here goes.
I agree with Lindsay Morris that simply monitoring the most recent
incremental only gives you half the information you require.
Even if the incremental reports no errors by whatever method you choose
to monitor its success, that does not catch events such as an entire
filespace not being backed up, or only 1024 bytes being transferred when
you would normally expect a much larger amount.
On that basis, a combination of success/failure tests needs to be
executed to get the whole picture, and this is not available via an
'out of the box' command. I therefore found it necessary to write a
script to obtain the information via SQL.
The script (Perl on Solaris) issues multiple SQL statements because,
without an OUTER JOIN capability, I was unable to combine all the
information into a single statement.
The SQL also assumes standardisation of the client schedule names; in
this case all the incremental schedules are named INCR_nodename.
The script has three major steps:
Step 1
A list of all clients is generated, excluding nodes with 'NOCHECK'
in the CONTACT field. The NOCHECK marker allows me to exclude clients I
know will report incremental backup failures, i.e. clients that no
longer exist but whose backups still need to be kept.
select substr(domain_name,1,3) as dummy, nodes.node_name, nodes.contact
from nodes
where (nodes.contact is NULL or upper(nodes.contact) not like 'NOCHECK%')
  and nodetype='CLIENT'
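The Perl script runs this query through the TSM administrative client and
reads the rows back. As a rough sketch only (in Python rather than the
script's Perl, and assuming tab-delimited `dsmadmc -dataonly=yes -tab`
style output; the `parse_node_list` helper is my own illustration, not
part of the original script), the parsing step might look like this:

```python
def parse_node_list(dsmadmc_output):
    """Parse tab-delimited query output into node records.

    Each line is expected to hold: domain prefix, node name, contact.
    Nodes whose CONTACT field starts with NOCHECK are already filtered
    out by the SQL, so no extra filtering happens here.
    """
    nodes = []
    for line in dsmadmc_output.splitlines():
        line = line.strip()
        if not line:
            continue
        fields = line.split("\t")
        # A NULL CONTACT can come back as an empty or missing field; pad it.
        while len(fields) < 3:
            fields.append("")
        domain, node_name, contact = fields[0], fields[1], fields[2]
        nodes.append({"domain": domain, "node": node_name, "contact": contact})
    return nodes

sample = "HO_\tNT128DFMS\t\nHO_\tNT128PDB3\tXFS2"
for rec in parse_node_list(sample):
    print(rec["node"], repr(rec["contact"]))
```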
Step 2
For each node in that list, perform the following:
Issue an SQL statement against the SUMMARY table extracting the sum
of the failures, the bytes transferred, TSM's idea of success, and the
schedule run time, based upon the schedule name:
select summary.schedule_name, sum(summary.failed) as failures,
       sum(summary.bytes) as amount, summary.successful,
       sum(cast((summary.end_time-summary.start_time)minutes
           as decimal(18,0)))
from summary
where summary.schedule_name like 'INCR%'
  and summary.entity='$node_name'
  and cast((current_timestamp-summary.end_time)days as decimal(18,0)) < 1
group by summary.successful, summary.schedule_name
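How the script reacts to this query's result could be sketched like this
(Python rather than the original Perl; the field names and the "no row
means the schedule never ran" rule are my reading of the approach, not
quoted from the script):

```python
def check_summary_row(row):
    """Classify one node's incremental result from the SUMMARY query.

    `row` is a dict of the columns the query returns, or None when no
    row came back -- itself a red flag, since it means the INCR_nodename
    schedule has not run in the last day.
    """
    if row is None:
        return "NO RUN"    # nothing in SUMMARY for the last 24 hours
    if row["failures"] > 0 or row["successful"] != "YES":
        return "FAILED"    # TSM reported failures or non-success
    return "OK"

print(check_summary_row(None))                                  # NO RUN
print(check_summary_row({"failures": 0, "successful": "YES"}))  # OK
print(check_summary_row({"failures": 3, "successful": "YES"}))  # FAILED
```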
Issue an SQL statement against the FILESPACES table to identify the
greatest number of days that any filespace has gone without being backed
up, excluding filespaces whose FSID appears as an XFSn token in the
node's CONTACT field (where n is an FSID for the node). The XFS marker
allows me to stop a filespace from generating errors when I know it will
no longer be backed up, e.g. someone removed a drive from an NT client.
select distinct substr(nodes.domain_name,1,3),
       max(cast((current_timestamp-filespaces.backup_end)days
           as decimal(18,0)))
from filespaces, nodes
where filespaces.node_name='$node_name'
  and nodes.node_name=filespaces.node_name
  and filespaces.filespace_id not in ($x_fsid)
group by nodes.domain_name
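The $x_fsid placeholder is filled from the XFSn tokens found in the
node's CONTACT field. A minimal sketch of that extraction (Python, not
the original Perl; the '0' fallback for an empty list is my own
assumption to keep the 'not in (...)' clause valid):

```python
import re

def excluded_fsids(contact):
    """Extract the FSIDs named by XFSn tokens in a node's CONTACT field.

    Returns a comma-separated string for the SQL 'not in (...)' clause,
    or '0' when nothing is excluded (assumption: FSID 0 is never a real
    filespace, so the clause stays valid and excludes nothing).
    """
    fsids = re.findall(r"XFS(\d+)", contact or "", flags=re.IGNORECASE)
    return ",".join(fsids) if fsids else "0"

print(excluded_fsids("XFS1 XFS3"))   # 1,3
print(excluded_fsids(""))            # 0
```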
Step 3
Generate HTML to display the results, highlighting unacceptable
results in an alternate colour (red) and possible problems in yellow,
e.g. when less than 1 KB or more than 5 GB was transferred.
Obviously this could be converted to a pager message, email, or whatever.
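The highlighting decision could be sketched as follows (Python rather
than Perl; the under-1-KB / over-5-GB yellow thresholds come from the
post, while treating zero bytes as red is my own guess at what counts as
"unacceptable"):

```python
def byte_colour(nbytes):
    """Pick a highlight colour for the transferred-bytes cell.

    Under 1 KB or over 5 GB is flagged yellow as a possible problem;
    zero bytes is flagged red (assumption: nothing transferred at all
    is unacceptable rather than merely suspicious).
    """
    KB, GB = 1024, 1024 ** 3
    if nbytes == 0:
        return "red"
    if nbytes < KB or nbytes > 5 * GB:
        return "yellow"
    return "white"

def html_cell(value, colour):
    """Wrap a value in an HTML table cell with a background colour."""
    return '<td bgcolor="%s">%s</td>' % (colour, value)

print(html_cell("1.74 M", byte_colour(int(1.74 * 1024 * 1024))))
```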
An example
HOST    NODE           SUCCESSFUL  BYTES     FAILURES  ELAPSED    FILESPACE
                                                       (Minutes)  DAYS
---------------------------------------------------------------------------
HO_TSM  NT128ARASHYD   YES         1.74 M    1         3          0
HO_TSM  NT128DFMS      YES         7.02 M    3         6          0
HO_TSM  NT128PDB3      YES         882.26 M  0         21         0
HO_TSM  NT128PDCADS01  YES         4.36 G    0         155        6
Peter Griffin
Sydney Water
-----------------------------------------------------------
This e-mail is solely for the use of the intended recipient
and may contain information which is confidential or
privileged. Unauthorised use of its contents is prohibited.
If you have received this e-mail in error, please notify
the sender immediately via e-mail and then delete the
original e-mail.
-----------------------------------------------------------