Old Data not expiring

HardeepSingh

Hi,

We've been facing a lot of capacity issues recently, which led us to investigate why we are not deleting data as fast as we are adding it.

Case 1:
One of our major concerns is the haphazardness of the setup.
We have multiple Data Domains in our environment, each serving as the vaulting and replication target for another. Since the early days, TDP for Oracle was pointing to a copy destination backing directly to an NFS share on one of the Data Domains; let's just call it DD1.
We also had a VTL set up for the same TSM server, but it was using the VTL on DD2.
DD1 and DD3 are each other's replication/DR site,
and DD2 and DD4 are each other's replication/DR site.

So, when DD1 started getting full, we sought space from DD2 and moved the backups to the VTL. Once the VTL started getting full after 2-3 months, we moved the destination back to DD1, as it had reclaimed some space by then. This activity has been repeated 4-5 times now. The policy domain remained the same.
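
(For reference, each of these destination changes was done by repointing the backup copy group and re-activating the policy set, roughly as below; the domain, policy set, management class and storage pool names here are just placeholders for our own:)

  /* point the backup copy group at the new pool, then activate the policy set */
  UPDATE COPYGROUP ORA_DOMAIN ORA_POLICY ORA_MC STANDARD TYPE=BACKUP DESTINATION=DD1_NFSPOOL
  VALIDATE POLICYSET ORA_DOMAIN ORA_POLICY
  ACTIVATE POLICYSET ORA_DOMAIN ORA_POLICY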

The RMAN DBAs were advised to use a 21-day backup retention, and they delete their backups accordingly. But what we've noticed is that the data doesn't completely expire within that window. We still see tapes holding data from random months, say May, August and October of 2013, and February and April of 2014. It doesn't make sense.
If the copy destination keeps changing, does it affect expiration of data on TSM?
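
(On the RMAN side, the 21-day retention and the corresponding deletions are handled roughly like this; the exact commands our DBAs run may differ:)

  RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 21 DAYS;
  RMAN> CROSSCHECK BACKUP;
  RMAN> REPORT OBSOLETE;
  RMAN> DELETE OBSOLETE;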

Case 2:
We recently changed the policy domain of a number of nodes, allowing them to move to a new copy destination. These nodes were causing issues with backup, as the local storage pool would quickly fill to 100% once they started backing up.
Keeping all copy group parameters the same, we changed the domain and copy destination of these nodes.
Will this affect the expiration and restoration of the data?
 
Will this affect the expiration and restoration of the data?
Expiration, possibly. Nodes moved to a different policy domain now get the data retention of the management classes in the new domain. If some of those management classes do not exist in the new policy domain, the objects bound to them will be rebound to the new domain's default management class.

Restore won't be affected, unless you are trying to retrieve older versions and the retention is shorter in the new domain.
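
(A quick sanity check from a dsmadmc administrative session, with placeholder node and domain names, shows where the nodes landed and which management classes exist in the new domain:)

  /* shows the policy domain the node now belongs to */
  QUERY NODE ORA_NODE1 FORMAT=DETAILED
  /* lists the management classes in the new domain's active policy set */
  QUERY MGMTCLASS NEW_DOMAIN ACTIVE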

If the copy destination keeps changing, does it affect expiration of data on TSM?
No, the copy destination has no effect on expiration; only the retention settings on the copy group determine when data expires.
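
(Those retention settings sit on the backup copy group. For TDP for Oracle nodes, the commonly recommended values are VEREXISTS=1, VERDELETED=0, RETEXTRA=0 and RETONLY=0, so an object becomes eligible for expiration as soon as RMAN deletes it. To check them, with placeholder names:)

  QUERY COPYGROUP NEW_DOMAIN ACTIVE ORA_MC STANDARD TYPE=BACKUP FORMAT=DETAILED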
 
Tony: It does seem so... but they are not growing exponentially, just a consistent rise of a few GBs per month.

Marclant: Thank you.
But it still doesn't get me anywhere. How do I identify the cause of the data still sitting on the tapes?
There's over 200 TB of unaccounted-for data. When the DBAs try to expire the data for a given month, they don't see anything in the catalog.
 
But it still doesn't get me anywhere. How do I identify the cause of the data still sitting on the tapes?
If we are talking about Oracle data, the DBAs are responsible for deleting old backups using RMAN. From a TSM perspective, those objects will never expire on their own, because they are active objects and active objects cannot expire.
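
(To see where that space sits and how old the objects are, something like the following from dsmadmc can help; the node and volume names are placeholders, and the SELECT assumes the standard BACKUPS table columns:)

  /* files and space per filespace and storage pool */
  QUERY OCCUPANCY ORA_NODE1
  /* sample of the objects sitting on a specific volume */
  QUERY CONTENT DD1_VOL001 COUNT=100
  /* active objects per month; large counts from 2013/2014 are the leftovers */
  SELECT YEAR(backup_date) AS yr, MONTH(backup_date) AS mon, COUNT(*) AS objects -
    FROM backups WHERE node_name='ORA_NODE1' AND state='ACTIVE_VERSION' -
    GROUP BY YEAR(backup_date), MONTH(backup_date)

If that shows large counts of active objects from 2013 and 2014, those are exactly the pieces RMAN no longer knows about, which is what the links below are for.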

If they no longer exist in RMAN, but still exist in TSM, then follow these:
http://www-01.ibm.com/support/docview.wss?uid=swg21380804
http://www-01.ibm.com/support/knowl...m.ibm.itsm.db.orc.doc/r_dporc_cmd_syncdb.html
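
(The second link is the tdposync utility, run on the Oracle server. Roughly, it asks for the tdpo.opt file(s) of the node(s) involved and a date range, compares the RMAN catalog against what is stored on the TSM server, and offers to delete the orphaned objects it finds:)

  tdposync syncdb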
 
When deleting backups with RMAN and Data Protection for Oracle, the deletions can be performed in various ways. For example:
An RMAN script could run a CROSSCHECK followed by the DELETE OBSOLETE and DELETE EXPIRED commands.
An RMAN script could issue a CHANGE BACKUPPIECE ... DELETE command against specific backup pieces.
If the RMAN retention policy (RECOVERY WINDOW or REDUNDANCY) is set, a maintenance channel could be allocated to process the deletions based on that retention.
When running any of these commands, it is very important to ensure that the same environment variables are used for the deletion as were specified for the backups. Specifically, the TDPO_OPTFILE setting must be the same as the one used during the backup; see the sketch below.
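
(A sketch of such a maintenance session; the tdpo.opt path and connect string are placeholders, and the option file must be the same one the backups were taken with:)

  rman target /

  RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE sbt
          PARMS 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  RMAN> CROSSCHECK BACKUP;
  RMAN> DELETE NOPROMPT OBSOLETE;
  RMAN> DELETE NOPROMPT EXPIRED BACKUP;
  RMAN> RELEASE CHANNEL;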
 
Hi Hardeep,

Expiration is a separate process, and a change in the primary or secondary pool would not affect it.

I understand that you are also concerned about Oracle database backups not expiring properly.

If you are using a TSM server lower than 7.1.3, you need to ask the Oracle DBAs to run TDPOSYNC for every node that is being backed up.
There is a possibility that they are not performing it for all the nodes.
Once TDPOSYNC completes without any errors, run EXPIRE INVENTORY to clear up the data on the TSM server.

If you are on TSM server 7.1.3 or above, you can instead run 'DEACTIVATE DATA nodename TODATE=<date>' and then EXPIRE INVENTORY.
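
(For example, from a dsmadmc administrative session, with a placeholder node name and a date in your server's date format:)

  /* mark backup objects older than the given date inactive */
  DEACTIVATE DATA ORA_NODE1 TODATE=03/31/2015
  /* then let expiration remove them */
  EXPIRE INVENTORY NODE=ORA_NODE1 WAIT=YES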
 