Johnwkay72
Active Newcomer
Trying to get an accurate accounting of how much a node backs up. I have queried against both the ActLog and the Summary tables, and the odd thing is that I get two different numbers: the two tables disagree, sometimes by as much as +/- 3%.
Example: (server names have been changed to protect the innocent)
-- Servers are using a 24 hour cycle (1 time per day) --
-- Select statement filters the Summary table for ONLY "BACKUP" activities --
-- Time stamp from ActLog and Summary tables match --
Svr-A: ActLog = 445.89 MB // Summary table = 467376531 bytes.
Converted to bytes to normalize = 467,549,552.64 // 467,376,531
Difference = -173,021.64 bytes, or -0.04%
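The normalization above can be reproduced with a short script. This is just a sketch of the arithmetic, assuming the ActLog's "MB" is a binary megabyte (1 MB = 1,048,576 bytes), which is how the conversion above was done:

```python
# Sketch: reproduce the ActLog-vs-Summary comparison, assuming
# the ActLog figure uses binary megabytes (1 MB = 1,048,576 bytes).
MIB = 1024 * 1024

actlog_mb = 445.89            # as reported in the ActLog
summary_bytes = 467_376_531   # as reported in the Summary table

actlog_bytes = actlog_mb * MIB        # 467,549,552.64 bytes
diff = summary_bytes - actlog_bytes   # -173,021.64 bytes
pct = diff / actlog_bytes * 100       # roughly -0.04%

print(f"ActLog:  {actlog_bytes:,.2f} bytes")
print(f"Summary: {summary_bytes:,} bytes")
print(f"Difference: {diff:,.2f} bytes ({pct:.2f}%)")
```

Note that the ActLog value is already rounded to two decimal places (±0.005 MB, about ±5,243 bytes), so rounding alone cannot account for the full ~173 KB gap.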
This might not seem significant, but when you are backing up 10+ TB of incremental data a day, that variance adds up. It also affects TIP/TEP and their data warehouse.
Does anyone out there have an idea why there is a difference?