Containers messed up after DB Recovery

mirrorsaw

ADSM.ORG Member
Hi everyone,
Some important node data was accidentally expired off (UPD COPY was run with the wrong parameters, and a subsequent EXPIRE INV then cleared much of it away). It was noticed too late to do anything about it on our primary TSM 8.1 server, so we ran a PIT DB RESTORE on our replication server.
This was successful and the node is back, showing the right size in the OCCUPANCY table. So that's encouraging.
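For reference, this is roughly how I checked the node's occupancy (the node name is just a placeholder and the column list is from memory, so adjust as needed):

q occupancy EXAMPLE_NODE
select node_name, filespace_name, stgpool_name, reporting_mb from occupancy where node_name='EXAMPLE_NODE'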
But now, when we try to replicate this node back to the primary server, we're getting a flood of complaints about extents that can't be found:

ANR4847W REPLICATE NODE detected an extent with ID 9131673671399711896 on
container S:\TSMDATA\Server1\DIRC\24\02\0000000000022498.dcf that is marked
damaged.

When I do a Q CONTAINER for these names, the containers exist, but the state is PENDING on all of them. I can't run an AUDIT CONTAINER against pending containers, so I'm a bit stuck. The documentation says that after a PIT RESTORE DB you may need to audit your containers, but will that find the missing extents, and is my only option to audit ALL containers? I have tens of thousands of them.
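For reference, these are roughly the commands I've been trying (the pool name is just an example; the container name is taken from one of the errors):

q container S:\TSMDATA\Server1\DIRC\24\02\0000000000022498.dcf f=d
q container stgpool=EXAMPLE_DIRPOOL state=pending
audit container S:\TSMDATA\Server1\DIRC\24\02\0000000000022498.dcf action=scanall
audit container stgpool=EXAMPLE_DIRPOOL action=scandamaged wait=no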

Any help would be appreciated. Also, just FYI, the time between the DB backup and the restore was only about 14 hours, and our reuse delay has always been set to 1 day, so I thought the deleted extents shouldn't have been physically reclaimed yet and we should be OK?
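In case it matters, this is how I'm reading the reuse delay off the pool (pool name is an example; the container reuse delay shows up in the detailed output):

q stgpool EXAMPLE_DIRPOOL f=d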
 
Might be of value to you, as you *MAY* have to set the pending containers back to a non-pending status via db2.
The local fix section shows how to do that.

HOWEVER, I have never yet had to do anything as described in that document. I'd open a Sev1 ticket with IBM and, depending on the reuse delay value you have in place, have them guide you through the right process.
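Very roughly, that kind of local fix means a manual db2 session against the server database as the instance owner; the actual statement that flips the container state has to come from the document or from IBM support, so treat this as the general shape only:

db2 connect to TSMDB1
db2 set schema tsmdb1
# the UPDATE that clears the pending state is whatever the local fix / IBM gives you
db2 "<statement from the local fix>"
db2 connect reset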
 