ldmwndletsm
ADSM.ORG Senior Member -- Joined Oct 30, 2019
Okay, the answer to this question is most likely a resounding "NO". After hunting around for a while on this forum and elsewhere, it appears that there is no support in this product for this, but in case I've overlooked something, or there might be some ingenious workaround, let me state the background and the problem.
[ BACKGROUND ]
We have a storage pool whose data must be restorable in perpetuity. Therefore, we do not use reclamation on this pool. Even if we set a very low threshold, reclamation would never occur because data is seldom deleted from the affected file spaces on the client -- certainly never enough to satisfy even a 1% reclamation threshold. We have 'No Limit' set on the copy group parameters. We have a copy pool for off-site storage. This data belongs to a single node, the sole member of its own collocation group. No, that probably wasn't necessary since it's the only node writing to that storage pool, but for consistency, we created a collocation group just for it. Data is first written to a disk pool, then copied ("backup stgpool") to copy pool tapes, migrated (nextstgpool; lowmig/highmig) from disk to the primary tape pool, and later a second "backup stgpool" is run to copy to copy pool tape any data on primary pool tape that might not have made it there before it was deleted from disk.
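For anyone following along, the daily cycle described above looks roughly like this as a dsmadmc macro -- the pool names DISKPOOL, TAPEPOOL, and COPYPOOL are placeholders for whatever your environment actually uses:

```
/* Daily cycle sketch -- DISKPOOL, TAPEPOOL, COPYPOOL are placeholder names */
backup stgpool DISKPOOL COPYPOOL wait=yes   /* copy new disk data to the copy pool first  */
migrate stgpool DISKPOOL lowmig=0 wait=yes  /* drain disk to the primary tape pool        */
backup stgpool TAPEPOOL COPYPOOL wait=yes   /* catch anything that migrated to tape       */
                                            /* before it was copied off disk              */
```

The second "backup stgpool" is a no-op for data already in the copy pool; it only picks up files that slipped through the first pass.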
[ PROBLEM ]
The problem that we've run into is that the copy pool tapes are taking longer and longer to become FULL than they used to. Obviously, the primary pool tapes are not a concern as they have forever to fill. Yes, we can use "move data" to consolidate the copy pool volumes onto the one filling tape with the least space (e.g. forcing this by marking the others readonly). But this still will not push the remaining copy pool tape close enough to 100% usage. So we have a choice between waiting for it to eventually fill up or taking a low-usage tape off site. This is okay from time to time, but it will eventually take its toll on our tape supply if we do it ad infinitum. And we cannot simply return a tape from off site later to append to it, since that places us at risk.
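The consolidation step mentioned above would be something along these lines -- volume names are placeholders, and this assumes VOL001 is the filling volume with the least free space:

```
/* Consolidate filling copy pool volumes onto the least-empty one (VOL001). */
/* Volume names are placeholders.                                           */
update volume VOL002 access=readonly   /* stop new writes to the other filling volumes */
update volume VOL003 access=readonly
move data VOL002 wait=yes              /* contents move within the same copy pool,     */
move data VOL003 wait=yes              /* landing on VOL001                            */
```

With no STGPOOL parameter, "move data" keeps the data within the volume's own pool, so this just repacks the copy pool.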
With other storage pools, where reclamation is used, this is not a concern: on a weekly basis we get back a fair number of the off-site tapes for reclaims (never mind the ones that never come back). But that is not possible with the aforementioned pool.
[ FANTASY SOLUTION ]
It would be nice if we could simply make a copy of a specific copy pool tape. Then that copy could be taken off site as a surrogate, and we'd repeat that each week until the original tape was full, at which point it would go off site and all the temporary surrogates would be brought back and recycled. We used to do this with another backup product that supported it, since it allowed more granular control on a per-tape basis -- very flexible, and it worked like a champ. We never ate through too many of these temporary copy tapes before the original filled up, so it was no big deal in terms of temporary tape consumption, and it allowed us to space out the deposits of the original copy tapes, or at least let them reach higher usage, while still offering proper protection.
1. A backupset could be created, I suppose, but it doesn't look like it allows you to specify the input volumes, and it would capture all the data in the named file spaces, right? I checked, and there are a number of file spaces on the currently filling copy pool tape with the least remaining space. Having to run a full backup of all of those file spaces would take a long time.
2. Creating a second copy storage pool and running a second "backup stgpool" command would end up copying all the data in the primary pool all over again to the second copy pool, since none of it would exist there yet.
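To make point 1 concrete: you can list what's on the candidate volume, but backupset generation is scoped by node and file space, not by input volume. Volume, node, file space, and device class names below are placeholders:

```
/* See which nodes/file spaces live on the filling copy pool volume */
query content VOL001 count=50

/* GENERATE BACKUPSET selects by node and file space -- there is no  */
/* parameter to restrict it to a single input tape, which is why     */
/* approach 1 can't be scoped to one volume:                         */
generate backupset MYNODE WEEKLYSET /fs1,/fs2 devclass=LTOCLASS scratch=yes
```

So any backupset would pull every active file in those file spaces from wherever they happen to reside, not just the data on VOL001.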
We really need to be able to just make a copy of the incremental from any one tape.
[ ACTUAL SOLUTION ]
Would it be possible to modify our modus operandi to automate creating a second copy pool volume for each "original" copy pool volume -- BUT without having to re-copy all the data in the primary pool, as that would take a year?
Perhaps we should have set this up from the beginning, but now it's too late? Okay then: what would be the closest thing in TSM that we could do now to mitigate this problem going forward? I guess that's the real question.