Move Data command in TSM

ab00341890

In our environment we take absolute file system backups directly to tape, and we have plenty of low-utilization media in that particular storage pool (they are just 0.6/1/2 percent utilized). But when we issue the MOVE DATA command to free those media, it takes a volume from scratch every time, instead of one of the low-utilization volumes in the same storage pool. Please advise on this.
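For example, this is roughly what we run (the pool name and volume name below are just placeholders):

Code:
query volume stgpool=<poolname>
move data <low utilized volume>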
 

Hi !

This is probably due to collocation.

Look into your tape storage pool with "q stgp <stgpoolname> f=d"; if "Collocate?" is set to "Filespace" or "Node", that is why.
You can also see where a client/node's data is with the command: query nodedata <nodename>

A restore from tape will take a long time if a client/node's data is spread over a lot of tapes shared with other nodes, so don't turn collocation off unless you really want and need to.

Set Maximum Scratch Volumes Allowed equal to the Number of Scratch Volumes Used:
dsmadmc update stg <stgpoolname> maxscratch=<low value>

And then try "move data" again!
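So the rough sequence would be something like this (replace the placeholders with your own pool, node and volume names):

Code:
q stgp <stgpoolname> f=d
q nodedata <nodename>
update stg <stgpoolname> maxscratch=<number of scratch volumes used>
move data <low utilized volume>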

Regards,
Nicke
 
Hello ...

Thank you so much for the reply. In our environment, the pool where we are running MOVE DATA is TAPE_STGP, and Collocate is set to Group.
q stgp TAPE_STGP f=d

Storage Pool Name: TAPE_STGP
Storage Pool Type: Primary
Device Class Name: LTO5
Storage Type: DEVCLASS
Cloud Type:
Cloud URL:
Cloud Identity:
Cloud Location:
Estimated Capacity: 622,983 G
Space Trigger Util:
Pct Util: 36.2
Pct Migr: 0.1
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description:
Overflow Location:
Cache Migrated Files?:
Collocate?: Group
Reclamation Threshold: 60
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 99,999
Number of Scratch Volumes Used: 127
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): ADMIN
Last Update Date/Time: 06/29/17 13:02:26
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:
Deduplicate Data?: No

q node ABC_FILE_MLY f=d

Node Name: ABC_FILE_MLY
Platform: Linux x86-64
Client OS Level: 3.0.101-107-default
Client Version: Version 8, release 1, level 0.2
Application Version: Version 0, release 0, level 0.0
Policy Domain Name: ABSOLUTE_FILE_DOM
Last Access Date/Time: 10/16/17 17:00:04
Days Since Last Access: 1
Password Set Date/Time: 09/14/17 12:13:57
Days Since Password Set: 33
Invalid Sign-on Count: 0
Locked?: No
Contact:
Compression: Client
Archive Delete Allowed?: Yes
Backup Delete Allowed?: Yes
Registration Date/Time: 08/15/17 11:13:17
Registering Administrator:
Last Communication Method Used: Tcp/Ip
Bytes Received Last Session: 56,506.65 M
Bytes Sent Last Session: 28,019
Duration of Last Session: 1,155.25
Pct. Idle Wait Last Session: 8.94
Pct. Comm. Wait Last Session: 126.88
Pct. Media Wait Last Session: 0.03
Optionset:
URL:
Node Type: Client
Password Expiration Period:
Keep Mount Point?: No
Maximum Mount Points Allowed: 40
Auto Filespace Rename : No
Validate Protocol: No
TCP/IP Name: ntcsr301
TCP/IP Address: 10.8.45.98
Globally Unique ID: 1d.35.4e.98.6d.36.11.e0.86.7a.00.14.5e.1c.98.86
Transaction Group Max: 0
Data Write Path: ANY
Data Read Path: ANY
Session Initiation: ClientOrServer
High-level Address:
Low-level Address:
Collocation Group Name:
Proxynode Target:
Proxynode Agent:
Node Groups:
Email Address:
Deduplication: ClientOrServer
Users allowed to back up: All
Role:
Role Override: UseReported
Processor Vendor:
Processor Brand:
Processor Type:
Processor Model:
Processor Count:
Hypervisor:
API Application:
Scan Error: Yes
MAC Address:
Replication State: None
Replication Mode: None
Backup Replication Rule: DEFAULT
Archive Replication Rule: DEFAULT
Space Management Replication Rule: DEFAULT
Replication Primary Server:
Last Replicated to Server:
Client OS Name: LNX:SUSE Linux Enterprise Server 11 (x86_64)
Client Processor Architecture: x64
Client Products Installed: BA
Client Target Version: (?)
Authentication: Local
SSL Required: Default
Split Large Objects: Yes
At-risk type: Bypassed
At-risk interval:
Utility URL:
Replication Recovery of Damaged Files: Yes
Decommissioned:
Decommissioned Date:
 
OK, and Collocate = Group is the default setting for these types of storage pools.
If you haven't defined any collocation groups, then collocate=group behaves like collocate=node ...
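A quick way to check whether any collocation groups are defined at all (and in your "q node ... f=d" output above, the "Collocation Group Name" field is empty, so ABC_FILE_MLY is not in one):

Code:
query collocgroup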

First check where the data is located for node "ABC_FILE_MLY":
"q nodedata ABC_FILE_MLY"

And then: update stgp TAPE_STGP maxscratch=120

And try: move data <tapeThatABCFileHAS>

When it completes, run q nodedata ABC_FILE_MLY again and see the difference.

Continue with the other low-utilization tapes, and when you are done, reset "max scratch":
update stgp TAPE_STGP maxscratch=99999

Also check during the move data whether there is a new TSM tape request: q req
If there is, it might be an unavailable tape or a previously checked-out tape.
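Put together, something like this (ABC123L5 here is just an example volume name; use one that q nodedata shows for the node):

Code:
q nodedata ABC_FILE_MLY
update stgp TAPE_STGP maxscratch=120
move data ABC123L5
q req
q nodedata ABC_FILE_MLY
update stgp TAPE_STGP maxscratch=99999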

//Nicke
 
Thank you so much, Nicke... Yesterday, after changing the MAXSCRATCH count to 120, it was not mounting media from scratch, but today when I run MOVE DATA it is taking media from scratch again... Please advise.
 

Previously it looked like you were using 127 scratch tapes, and your reuse delay is 0 days. If MOVE DATA freed up more than 7 tapes, you will start putting new data right back onto a scratch volume, and it could be one of the tapes you just freed, since those tapes return to scratch immediately.
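For reference, the setting that controls this is the pool's reuse delay. For example, a 1-day delay (just an example value) would keep emptied volumes in PENDING status for a day before they return to scratch:

Code:
update stgpool TAPE_STGP reusedelay=1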
 
Thanks. So I need to set the MAXSCRATCH count again to 120, since the Number of Scratch Volumes Used in that pool is now 118?
 
Yeah, but that won't fix your problem of low tape utilization in the long run.
As Nicke posted above, it's likely due to the collocation setting on your tape pool.
Check out these two links and then decide how best to balance your requirements for restores vs. tape utilization:
https://www.ibm.com/support/knowled...1/com.ibm.itsm.srv.doc/t_colloc_planning.html
https://www.ibm.com/support/knowled...om.ibm.itsm.perf.doc/t_data_group_colloc.html
If you want to continue using collocation and collocate by group, read up on that aspect as well, as you'll need to define a collocation group and members for that group.
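Since your pool is already set to Collocate=Group, the missing piece is the group itself plus its members. Very roughly (the group name here is just an example):

Code:
define collocgroup FILE_SERVERS description="File backup nodes"
define collocmember FILE_SERVERS ABC_FILE_MLY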
 
Read the material that RecoveryOne referenced; it will help you better understand what is likely happening.

You can try this query to see which node(s) are on the volume you are moving:
select distinct node_name from contents where volume_name='ABC123L7'

Then run QUERY NODEDATA for the node(s) listed above to see if they have any filling volumes. If not, that's why it went to a scratch volume.
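To see whether those nodes still have any filling tapes in the pool, something like this may help (the node name is a placeholder):

Code:
q nodedata <nodename> stgpool=TAPE_STGP
q volume stgpool=TAPE_STGP status=filling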
 
I use this query to find nodes which aren't members of a collocation group. It can be helpful for this type of issue.

Code:
select node_name, collocgroup_name from nodes where length(collocgroup_name) is null
 