Why do onsite tapes not go to 100% used?

buster99

ADSM.ORG Member
Hello, I'm running into an issue with my backups. They recently started using more tapes than usual for both onsite and offsite backups. While looking into this, I noticed a large number of onsite tapes in the library that still have space on them but haven't been written to in quite a long time, a few years in some cases. Is there a way to "tell" TSM to use these tapes before it uses empty scratch tapes?
Thank you

Here is an example of a tape that is currently in the library but not being used for new backups:

Volume Name: 016027L4
Storage Pool Name: ONSITE_TP
Device Class Name: LTO4
Estimated Capacity: 1.5 T
Scaled Capacity Applied:
Pct Util: 0.4
Volume Status: Filling
Access: Read/Write
Pct. Reclaimable Space: 0.0
Scratch Volume?: Yes
In Error State?: No
Number of Writable Sides: 1
Number of Times Mounted: 9
Write Pass Number: 1
Approx. Date Last Written: 06/02/14 22:12:41
Approx. Date Last Read: 04/16/16 17:30:21
Date Became Pending:
Number of Write Errors: 0
Number of Read Errors: 0
Volume Location:
Volume is MVS Lanfree Capable : No
Last Update by (administrator):
Last Update Date/Time: 06/19/14 15:39:45
Begin Reclaim Period:
End Reclaim Period:
Drive Encryption Key Manager: Library
Logical Block Protected: No
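
For reference, filling volumes like this one can be listed oldest-first with a SELECT against the server's VOLUMES table from an administrative client session (a sketch; the exact columns shown are in the standard schema, but verify against your server level):

```
select volume_name, stgpool_name, pct_utilized, last_write_date from volumes where status='FILLING' order by last_write_date
```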
 
Please check the collocation option in the Storage Pool (q stg STGNAME f=d)

Set it to Group
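
For reference, the check and the change would look like this (assuming the pool name ONSITE_TP from the volume output above):

```
q stg ONSITE_TP f=d
update stgpool ONSITE_TP collocate=group
```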
 
Here is how TSM selects volumes to write to:
https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.1/com.ibm.itsm.srv.doc/c_colloc_disabled.html

So the first tape it picks is a filling tape; the second is a scratch tape.


If your backups go directly to tape and you have multiple sessions writing at the same time, each session needs its own tape, so you will end up with more tapes in filling status than if backups went to a disk pool first and then migrated to tape.
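
A minimal sketch of that staging setup, with a hypothetical pool name (BACKUP_DISK) and file path; migration to the tape pool runs between the highmig/lowmig thresholds:

```
define stgpool BACKUP_DISK disk highmig=80 lowmig=20 nextstgpool=ONSITE_TP
define volume BACKUP_DISK /tsm/disk/vol01.dsm formatsize=50000
```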

Hello marclant, thank you for the reply. I already have the stgpools set to "collocate: no" while the nodes are assigned to 1 of 6 collocation groups. I would think that tapes that are 30% used would be used first instead of sitting idle for long periods of time.

Do you see it differently?

Thank you
 
Please check the collocation option in the Storage Pool (q stg STGNAME f=d)

Set it to Group

Balirake, thank you for the reply. I'm not sure I want to make that change across the board. I have thousands of onsite/offsite tapes that would be affected. We currently collocate at the node level with 3 PRO and 3 DEV options.
 
Do you see it differently?
My opinion doesn't matter. I have to defer to the manual: it takes a filling tape first, then continues on to scratch tapes. I suspect this is to speed up writes; otherwise it would be very slow to mount and dismount multiple filling tapes. Unless you are running low on scratch tapes, I would not worry about it. They will get used eventually.

If you are running low on scratch tapes, you can use MOVE DATA to move the data to other filling tapes.
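
For example, to empty the volume shown earlier back into the rest of its pool (by default the data moves to other volumes in the same pool, and since it was a scratch volume it returns to scratch once empty):

```
move data 016027L4
```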

I already have the stgpools set to "collocate: no" while the nodes are set to 1 of 6 collocation groups.
We currently collocate at the node level with 3 PRO and 3 DEV options.
If you already have "collocate: no" at the storage pool, you are no longer using collocation. However, if collocation was enabled at some point in the past, it would have consumed more filling tapes at the time, so it will take a while to use them all, since volume selection is always one filling tape followed by scratch.
 