tape usages

henrychen

need help here with what i think is a configuration issue ...



i have a tape stgpool defined as



"def stg on.tp.abc dc.tape.01 col=no maxscr=1000000"



and a disk stgpool defined as



"def stg on.di.abc disk next=on.tp.abc"



so, when the disk stgpool fills up to the hi mig mark it will push off to the tape stgpool. what i'm seeing is that the migration process claims more than one scratch tape in the tape library, so i end up with multiple partially used tapes and have to run the 'move data' command manually to consolidate them ... is there a way i can make it use one scratch tape at a time, fill it up, then move on to another scratch tape? hope this makes sense. let me know if you need more information from me. thanks.



-henry
 
if migproc=1 then only one tape is used for migration at a time. Migration will first use a filling tape; if no filling tapes are available, it will use one scratch tape.
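In case it helps, the migration process count is set on the disk pool itself. A sketch of pinning it to one process (pool name taken from this thread; exact abbreviation assumed from the UPDATE STGPOOL syntax):

```
upd stg on.di.abc migprocess=1
```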
 
i do see migproc for this disk stgpool is set to 1. reading the help on this parameter, it seems like the number of migration processes is dependent upon this setting as well as the number of nodes with data on the migrating stgpool. i do have many nodes of data writing to the same disk stgpool, is this what's causing the problem? if so, how can i fix this? thanks.



-henry
 
Migproc controls the number of migration processes. The number of nodes writing to the disk pool has nothing to do with that. In your case I would look at MAXSize and/or collocation.
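As a sketch of what checking and adjusting those would look like (pool names from this thread; the 2G maxsize is purely an illustrative value, not a recommendation). The first command shows the current MAXSize and Collocate settings, the second caps the size of files accepted into the disk pool, and the third explicitly disables collocation on the tape pool:

```
q stg on.di.abc f=d
upd stg on.di.abc maxsize=2G
upd stg on.tp.abc col=no
```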



For a closer look at your problem it will help to have more info: server versions, client versions, # clients, poolsizes (q stg <poolname> f=d) would be a good starting point.
 
i'm using TSM 5.1.1 for both server and client. below is the q stg output for the disk pool and the tape pool:



tsm: SERVER1>q stg on.di.abc f=d

Storage Pool Name: ON.DI.ABC
Storage Pool Type: Primary
Device Class Name: DISK
Estimated Capacity (MB): 614,400.0
Pct Util: 99.6
Pct Migr: 99.3
Pct Logical: 100.0
High Mig Pct: 0
Low Mig Pct: 0
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 1
Next Storage Pool: ON.TP.ABC
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description:
Overflow Location:
Cache Migrated Files?: No
Collocate?:
Reclamation Threshold:
Maximum Scratch Volumes Allowed:
Delay Period for Volume Reuse:
Migration in Progress?: Yes
Amount Migrated (MB): 123,001.47
Elapsed Migration Time (seconds): 93,675
Reclamation in Progress?:
Volume Being Migrated/Reclaimed:
Last Update by (administrator): ADMIN
Last Update Date/Time: 07/29/2003 03:47:25
Storage Pool Data Format: Native
Copy Storage Pool(s):
Continue Copy on Error?:
CRC Data: No



tsm: SERVER1>q stg on.tp.abc f=d

Storage Pool Name: ON.TP.ABC
Storage Pool Type: Primary
Device Class Name: DC.TAPE.01
Estimated Capacity (MB): 108,555,110,977.9
Pct Util: 0.0
Pct Migr: 0.0
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes:
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description:
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 60
Maximum Scratch Volumes Allowed: 1,000,000
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Volume Being Migrated/Reclaimed:
Last Update by (administrator): ADMIN
Last Update Date/Time: 07/12/2003 20:02:51
Storage Pool Data Format: Native
Copy Storage Pool(s):
Continue Copy on Error?:
CRC Data: No



i wasn't familiar with the maxsize parameter, so i read the help for it. i do know we have collocation set to 'no' on the tape stgpool. is there a configuration issue? thanks.



-henry
 
Henry,



Your post is a couple of weeks old, so if you've conquered your situation, please disregard...



I noticed that the High Mig Pct/Low Mig Pct on what appears to be your disk storage pool are both set to 0%...that's a problem. With both thresholds at zero, migration is triggered as soon as any data lands on disk, and once the disk pool fills anyway (yours shows 99.6% utilized), backup sessions fail over directly to the next stgpool, which is tape. Since the tape pool has one million (a million???) scratches available, each session that falls over from the disk pool can mount its own scratch tape. Try resetting the high/low percentages on the disk pool to start.
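If it helps, a sketch of that update (pool name from this thread; 90/70 are just common illustrative thresholds, not a recommendation for your environment):

```
upd stg on.di.abc hi=90 lo=70
```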



Another thing to consider...you still have one million (a million???) scratch tapes available in your tape pool. If I'm not mistaken, TSM prefers to use a new tape, rather than reusing a partial tape, if allowed (if anyone knows differently, please let me know)...try reducing the number of available scratch tapes to something more reasonable (maybe half a million?) and see if that helps...
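Something like the following would do it (pool name from this thread; 200 is an arbitrary illustrative count, so size it to what your library actually holds):

```
upd stg on.tp.abc maxscr=200
```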



Good luck... :p
 