Hi,
I’ve built a simple internal dashboard for displaying the current status and most recent completion of each job in the Director. It also estimates the average job size and run frequency for each job (which we use for alerting when jobs run unexpectedly late or are abnormally sized). I pull this data directly from the catalog with a rather complex SQL query.
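For reference, the averages come out of something roughly like this (a simplified sketch, not the real query; it assumes successfully terminated jobs carry JobStatus 'T' and that job names are stable):

SELECT Name,
       AVG(JobBytes) AS AvgBytes,
       COUNT(*) / GREATEST(DATEDIFF(MAX(StartTime), MIN(StartTime)), 1) AS JobsPerDay
FROM Job
WHERE JobStatus = 'T'   -- completed OK
GROUP BY Name;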
The one additional feature I’d like to add is an estimate of progress, based on the current amount backed up, the start time, and the average size. I think I should be able to calculate this from the JobMedia table. Indeed this works pretty well for tape-based backups, but it produces crazy numbers (many orders of magnitude too big) for disk-based backups.
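The calculation itself is straightforward (a rough sketch; it assumes the running job ends up close to its historical average size and writes at a roughly constant rate):

progress     ~ CurrentBytes / AvgBytes
est. finish  ~ StartTime + (now - StartTime) / progress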
I’m currently using a simplified query like this to extract the data, and have the dashboard deliberately throw away any numbers that look wrong:
SELECT JobId, SUM(EndBlock - StartBlock) * 64512 AS CurrentBytes
FROM JobMedia
GROUP BY JobId;
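A variant that tags each sum with the volume type, so the dashboard can filter on it, looks something like this (a sketch; the MediaType values depend entirely on how the Storage resources are configured):

SELECT jm.JobId,
       m.MediaType,
       SUM(jm.EndBlock - jm.StartBlock) * 64512 AS CurrentBytes
FROM JobMedia jm
JOIN Media m ON m.MediaId = jm.MediaId
GROUP BY jm.JobId, m.MediaType;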
I’m obviously making a few assumptions here:
- A block is always 63k (64512 bytes). This seems to hold for LTO4 tapes. Are blocks for disk-based backups always a fixed size, and is it also 63k?
- The start block and end block lie in the same file. Again, this holds for LTO4 backups, but often not for disk backups.
Are files a fixed number of blocks long? Can I make any inference about how much data has been backed up for a job whose JobMedia records span file numbers? I couldn’t glean any useful answers to these questions from the schema documentation.
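For what it’s worth, a quick diagnostic like this shows how often a single JobMedia record crosses a file boundary (a MySQL-specific sketch, relying on boolean expressions summing as 0/1):

SELECT JobId,
       COUNT(*) AS Records,
       SUM(StartFile <> EndFile) AS SpanningRecords
FROM JobMedia
GROUP BY JobId;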
Is there a better, or indeed any other, way to retrieve this data from the catalog? I’d prefer to avoid scripting bconsole and scraping the output of "status storage" or "list jobid".
For reference, this has been tested against Bacula 5.0.2 and 5.2.13, using a MySQL catalog.
Regards,
Ben Roberts