Minimum change to send data to new storage copy pool?

ldmwndletsm

ADSM.ORG Senior Member
Joined
Oct 30, 2019
Can someone help me determine the minimum changes needed for the following scenario? All pools except the disk pools are on tape:

[ Scenario ]
You have copy pool CopyA with collocation disabled, but collocation is enabled on the primary pool PrimA. You have disk pool diskpoolA, which is the destination for copy group copygroupA. Host Alpha is in its own collocation group with no other nodes. Suppose you decide to implement collocation by group so that node Alpha is also collocated on the copy pool tapes, but you DON'T want to affect copy pool CopyA (no other nodes will be collocated there), and you DO want the same copy group retention parameters (VerExists, VerDeleted, RetExtra, RetOnly). What are the minimum changes you would need to make?
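For reference, the collocation setup described in the scenario (Alpha alone in a group, group collocation on PrimA) would look something like the following; the group name ALPHAGRP is a placeholder I made up, the pool and node names come from the scenario:

```
/* Put node Alpha in its own collocation group */
DEFINE COLLOCGROUP ALPHAGRP
DEFINE COLLOCMEMBER ALPHAGRP Alpha

/* Primary pool already collocates by group per the scenario */
UPDATE STGPOOL PrimA COLLOCATE=GROUP
```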

I'm thinking the following:

First, I don't see a way to do this without creating another copy pool.

1. Create a new copy storage pool (CopyB), with collocation set to group
2. Create a new disk pool (diskpoolB)
3. Create a new copy group with Destination set to diskpoolB
4. Create a new management class?

It would seem that since the name of a copy group is always STANDARD, a copy group is really identified by the unique triplet of policy domain, policy set, and management class. Clearly, if you populate your copy pool volumes from the data on disk (diskpoolA), as opposed to copying from primary tape, I don't know of a way to force TSM to copy all data except node Alpha's to copy pool CopyA and node Alpha's data to copy pool CopyB; the BACKUP STGPOOL command is just going to copy everything. So to segregate the data, I would think you'd need a separate disk pool for node Alpha, diskpoolB. I'm unclear on whether the management class, policy set, or policy domain would have to be different, but at least one of them would have to change to keep the triplet unique, so that you have the second copy group needed to segregate the data.
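A minimal sketch of the four steps in administrative commands, assuming the existing policy domain/set are DOMA/SETA and the device class is LTODEV (all placeholders), and sample retention values standing in for the real ones:

```
/* 1. New copy pool, collocated by group */
DEFINE STGPOOL CopyB LTODEV POOLTYPE=COPY COLLOCATE=GROUP MAXSCRATCH=50

/* 2. New disk pool to receive node Alpha's backups */
DEFINE STGPOOL diskpoolB DISK
DEFINE VOLUME diskpoolB /tsm/diskpoolB/vol01.dsm FORMATSIZE=51200

/* 4. New management class in the existing domain and set,
   which keeps the domain/set/class triplet unique */
DEFINE MGMTCLASS DOMA SETA ALPHACLASS

/* 3. New backup copy group (name defaults to STANDARD) with destination
   diskpoolB and the same retention values as the existing copy group */
DEFINE COPYGROUP DOMA SETA ALPHACLASS TYPE=BACKUP DESTINATION=diskpoolB -
   VEREXISTS=2 VERDELETED=1 RETEXTRA=30 RETONLY=60

VALIDATE POLICYSET DOMA SETA
ACTIVATE POLICYSET DOMA SETA
```

Node Alpha would then need to bind its files to ALPHACLASS (e.g. an `include * ALPHACLASS` statement in the client include-exclude list), and the nightly `BACKUP STGPOOL diskpoolB CopyB` keeps Alpha's data out of CopyA.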

[ Related question ]
Initially you might have many TBs of data to back up, so it doesn't take long to fill tapes. If data isn't being deleted frequently, then node Alpha doesn't hold the other nodes hostage for reclamation on the other copy pool (the one all the other nodes are using). But further down the road, if the daily backups on node Alpha wind down to a small amount, you might end up sending a bunch of partially full copy pool tapes off site unless you wait for them to fill, and you won't have the other nodes' data to help fill them.

What issues might you run into if you then changed the configuration so that Alpha sends its copy pool data to the same copy pool all the other nodes are using, so you're back to all nodes sharing one copy pool? Clearly, Alpha's existing copies would be split between two different copy pools. Problem? Hmm ....
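If you did switch back, the change itself seems to be just a copy-group update (same placeholder names as above); the wrinkle is the historical data:

```
/* Point Alpha's copy group back at the shared disk pool */
UPDATE COPYGROUP DOMA SETA ALPHACLASS TYPE=BACKUP DESTINATION=diskpoolA
ACTIVATE POLICYSET DOMA SETA

/* BACKUP STGPOOL only copies files not already in the target copy pool,
   so this re-copies Alpha's existing primary data into CopyA */
BACKUP STGPOOL PrimA CopyA
```

Once CopyA holds a copy of everything, one approach (worth verifying against your restore requirements first) would be to retire the CopyB volumes, since the data on them would by then be duplicated in CopyA.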
 