How to conglomerate tapes?

ldmwndletsm

ADSM.ORG Senior Member
Joined: Oct 30, 2019
Messages: 232
Reaction score: 5
Points: 0

We have a bunch of primary pool tapes (same storage pool) that are only partially written -- nothing wrong with the tapes -- and we'd like to consolidate them down to fewer tapes. Will "move data" accomplish this if we do it one volume at a time (possibly setting all but one to readonly as we work through them)? Or will "move data" simply use a scratch tape?

I thought I read somewhere that TSM will use a scratch tape for this, but that for something like "restore volume" it might append to a readwrite tape. We have more than enough scratch tapes, but if TSM always uses a scratch for the move, then we'd end up with the same number of tapes in the end, accomplishing nothing. Perhaps "move data" is the wrong tool for the job, given that we are not trying to move the data because of concerns with the physical media, but nothing else occurs to me other than just letting the tapes fill up -- a real shame, given that we don't need all those partially filled tapes, and all that data could probably fit on just a couple, freeing up 90% of them.
 

Hi,

Collocation may cause this. Check it with "q stg XXXXX f=d".

Reclamation will take care of the nearly empty volumes if collocation is set to None and the volumes are in FULL state.
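If it helps, the commands involved would look roughly like this (XXXXX stands in for your pool name, and the 50% threshold is only an example):

Code:
/* check the "Collocate?" and "Reclamation Threshold" fields for the pool */
query stgpool XXXXX format=detailed
/* manually reclaim any volumes that are more than 50% reclaimable */
reclaim stgpool XXXXX threshold=50 wait=yes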
 

Thank you, Trident. This storage pool does not use reclamation (Reclamation Threshold: 100). We do use collocation by group (Collocate?: Group), and we have a couple of groups. The copy group settings for this pool are 'No Limit', so nothing ever expires, and there are unlimited versions of every backed-up file. I counted up all the tapes in this storage pool, and 95+% of them have a pct_reclaim value of 0.0, which is not surprising. There are some above 0.0, but not very many.
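For reference, the count came from a query along these lines (with TAPESTG standing in here for the real pool name):

Code:
select volume_name, pct_reclaim, pct_utilized, status from volumes where stgpool_name='TAPESTG' order by pct_reclaim desc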

Backups are written to disk volumes (one for each storage pool) first and then migrated to tape later. We did have a problem wherein the disk volume for this storage pool had an issue, and a bunch of scratch tapes ended up getting used, all in a short period of time before the problem was resolved. So the backup server simply started mounting a lot of tapes to do the backups since it could not use disk. Prior, we would never see very many tapes in a filling status so it was never a big deal.

So in the aftermath of the problem, the question now is how we can consolidate these tapes down to a smaller number, since most of them are only partially full. Based on the IBM documentation for "reclaim stgpool", it would seem this won't be possible, since the specified threshold must be 1-99 and all of these are at 0.0.

1. Is my understanding correct here?

2. Would there be some other method?

3. I wasn't clear on what you meant when you said:

Reclamation will take care of the nearly empty volumes if collocation is set to None and the volumes are in FULL state.

Assuming some of these tapes instead had a pct_reclaim value of 1 or greater, would they need to be 'FULL' for us to use the manual "reclaim stgpool" method? Or could they still be FILLING? Also, why would collocation matter in that scenario?
 

Hi,
If your tapes are in FILLING state, then your option is to manually run "move data" for each tape. Keep in mind that group collocation will be honored as long as you have scratch tapes available. To prevent the use of more scratch tapes, you can make sure that:

Maximum Scratch Volumes Allowed is less than or equal to Number of Scratch Volumes Used

This has to be decreased as you work through the volumes with "move data".
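Checking the current values and capping the setting would look something like this (pool name and number are only examples):

Code:
/* see "Maximum Scratch Volumes Allowed" and "Number of Scratch Volumes Used" */
query stgpool TAPESTG format=detailed
/* cap it at (or below) the number already used so no new scratch tapes can be taken */
update stgpool TAPESTG maxscratch=150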

Modify the select statement below to suit your stgpool, and start by moving the tapes at the top of the list. By default, the "move data" command will write to a volume in the same storage pool. For a large number of small tapes, maybe make a FILE device class pool and move the data there, then migrate it to tape at a later stage.

Code:
Protect: TSM>select cast(VOLUME_NAME as char(12)),PCT_UTILIZED,status from volumes where stgpool_name='TAPESTG' order by 2

Unnamed[1]         PCT_UTILIZED     STATUS                                                                                                                           
-------------     -------------     --------------------------------------------------------------
800261L8                    0.1     FILLING                                                                                                                         
800203L8                    6.5     FILLING                                                                                                                         
800264L8                   18.1     FILLING                                                                                                                         
800257L8                   25.8     FILLING                                                                                                                         
800218L8                   30.5     FILLING                                                                                                                         
800104L8                   42.6     FULL                                                                                                                             
800002L8                   45.2     FULL                                                                                                                             
800083L8                   58.5     FULL                                                                                                                             
800001L8                   59.7     FULL                                                                                                                             
800025L8                   61.3     FULL                                                                                                                             
800013L8                   63.1     FULL                                                                                                                             
800003L8                   63.6     FULL                                                                                                                             
800004L8                   64.1     FULL                                                                                                                             
800098L8                   78.9     FULL
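Then, for the first volume on that list, something like:

Code:
move data 800261L8 wait=yes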
 

Thank you again, Trident. That was very helpful :) I did run that command, and we have a number of tapes at 0.0 (probably some data, but not enough to round up to a whole number), 1.1, 1.2, etc., before hitting the ones with more substantial utilization. I have a few more questions to confirm my understanding.

All the nodes that write to this storage pool are in one of three collocation groups, and there are no nodes that are not assigned to one of those three. I verified that. We have never changed the "MAXSCRatch" setting for the pool, and it's currently far greater than the number of volumes in the pool.

1.
To prevent the use of more scratch tapes, you can make sure that:
Maximum Scratch Volumes Allowed is less than or equal to Number of Scratch Volumes Used

If we have any tapes in this storage pool that are not in the tape library then should these be included in the number of volumes used or only the ones in the tape library ?

For example, if there are 200 tapes in the pool, but 50 are stored outside the tape library (we do not use reclamation on this pool so some of the older full primary pool tapes are boxed in order to free up storage slots) then would you still set MAXSCRatch to 200 or only to 150 ?

2. So as we work our way through, if we always change MAXSCRatch first to match the number of volumes in the pool then TSM will not be able to load a scratch tape to move any data and will instead be forced to use only the existing filling volumes. Is that right ?

In our case, we have plenty of filling tapes with nodes in each of the collocation groups, so we should be okay for a while as we progress.

3. Let's say we had 200 volumes in the pool, and the MAXSCRatch was set to 1000. If we change that to 200, and then run a "move data" on a volume, then once the move is completed, we would then have 199 volumes (1 volume now pending the reusedelay for the pool). If we ran another "move data" on another volume, without first decreasing the MAXSCRatch from 200 to 199, then TSM *could* load 1 scratch tape if it thought it needed one.

So to be safe, every time before running "move data" we would need to first decrement the MAXSCRatch by 1 (or by whatever number is necessary to match the number of volumes used), and so on and so forth, as long as we have enough filling tapes with nodes in each of the collocation groups to accommodate backups. Otherwise, failure to reduce the MAXSCRatch accordingly *could* result in more scratch tapes getting used during the move(s), nullifying the benefit. Is that right?

4. But even if MAXSCRatch was greater than the number of volumes in the pool, would TSM load a scratch tape for a "move data" if there was at least one filling tape (readwrite) with nodes in that collocation group ?

5. You mentioned the possible use of a file class volume(s) and then migrating to tape later. What about using a disk volume? Any preference?
 

Hi,

I have tried to answer your questions inline below.

Rgds,
Trident


Thank you again, Trident. That was very helpful :) I did run that command, and we have a number of tapes at 0.0 (probably some data, but not enough to round up to a whole number), 1.1, 1.2, etc., before hitting the ones with more substantial utilization. I have a few more questions to confirm my understanding.

All the nodes that write to this storage pool are in one of three collocation groups, and there are no nodes that are not assigned to one of those three. I verified that. We have never changed the "MAXSCRatch" setting for the pool, and it's currently far greater than the number of volumes in the pool.

1.

If we have any tapes in this storage pool that are not in the tape library then should these be included in the number of volumes used or only the ones in the tape library ?

For a primary pool, all tapes should be in the library. Reclamation or "move data" cannot work on tapes that are not present. Only copy pool tapes can be removed from the library: reclaiming those builds new tapes from the primary pool data, and then you can return the emptied tapes to the library. This is the DRM part of SP.
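If you use DRM, checking which offsite copy pool tapes have gone empty and can come back is roughly this (it assumes DRM is configured; the states depend on your setup):

Code:
/* list copy pool volumes at the vault that are now empty and ready to return */
query drmedia * wherestate=vaultretrieve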

For example, if there are 200 tapes in the pool, but 50 are stored outside the tape library (we do not use reclamation on this pool so some of the older full primary pool tapes are boxed in order to free up storage slots) then would you still set MAXSCRatch to 200 or only to 150 ?

Again, we can only work on the tapes that are present. But yes, set it to something less than 200. If these have low utilization, you can consolidate them with "move data". Either remove the collocation from the nodes, or reduce MAXSCRatch to a lower number.


2. So as we work our way through, if we always change MAXSCRatch first to match the number of volumes in the pool then TSM will not be able to load a scratch tape to move any data and will instead be forced to use only the existing filling volumes. Is that right ?

Yes. Collocation will always try to write the node data to the smallest number of tapes. For DR purposes this is wise, as restoring data will then require fewer tape mounts.

In our case, we have plenty of filling tapes with nodes in each of the collocation groups, so we should be okay for a while as we progress.

3. Let's say we had 200 volumes in the pool, and the MAXSCRatch was set to 1000. If we change that to 200, and then run a "move data" on a volume, then once the move is completed, we would then have 199 volumes (1 volume now pending the reusedelay for the pool). If we ran another "move data" on another volume, without first decreasing the MAXSCRatch from 200 to 199, then TSM *could* load 1 scratch tape if it thought it needed one.

It may. It depends on the environment. If your single task is to reduce the number of tapes, set MAXSCRatch to 150 and start filling up the existing volumes in FILLING state.

For the tapes with the lowest utilization, issue "move data"; the data will be read from that tape and written to a tape in FILLING state in the same pool.
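Putting that together, the loop would look roughly like this (the pool name is an example, the volume names are the ones from the list earlier):

Code:
/* stop the pool from taking new scratch tapes */
update stgpool TAPESTG maxscratch=150
/* empty the least-used filling tapes first; their data lands on other filling volumes in the pool */
move data 800261L8 wait=yes
move data 800203L8 wait=yes
/* confirm the emptied tapes went to EMPTY/PENDING and no new scratch was taken */
select volume_name, pct_utilized, status from volumes where stgpool_name='TAPESTG' order by 2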


So to be safe, every time before running "move data" we would need to first decrement the MAXSCRatch by 1 (or by whatever number is necessary to match the number of volumes used), and so on and so forth, as long as we have enough filling tapes with nodes in each of the collocation groups to accommodate backups. Otherwise, failure to reduce the MAXSCRatch accordingly *could* result in more scratch tapes getting used during the move(s), nullifying the benefit. Is that right?

Yes

4. But even if MAXSCRatch was greater than the number of volumes in the pool, would TSM load a scratch tape for a "move data" if there was at least one filling tape (readwrite) with nodes in that collocation group ?

If you have 50 nodes and each node has less data than the capacity of a tape, only 50 tapes would be needed.

5. You mentioned the possible use of a file class volume(s) and then migrating to tape later. What about using a disk volume? Any preference?

You can migrate from DISK -> FILE class. I'm not sure you are allowed to migrate from FILE class -> DISK, but I know you can move data in that direction.
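If you do go the FILE route, a rough sketch would be (device class name, pool name, path, and sizes are all just examples):

Code:
/* a FILE device class pointing at a disk directory */
define devclass FILEDEV devtype=file maxcapacity=50G mountlimit=20 directory=/tsmfile
/* a sequential FILE pool that migrates into the existing tape pool */
define stgpool FILEPOOL FILEDEV maxscratch=100 nextstgpool=TAPESTG
/* move a nearly empty tape into the FILE pool instead of back to tape */
move data 800261L8 stgpool=FILEPOOL wait=yes
/* later, push the staged data down to tape */
migrate stgpool FILEPOOL lowmig=0 wait=yes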
 

Thank you again for all your assistance with this. :) I have a more general question on MAXSCRatch and moving data, but I will submit that as a new post.
 