Migration failing due to insufficient mount points

cozcol - ADSM.ORG Member

Hi Experts,

When I run a migration manually, it fails with the following error:

ANR1134E Migration is ended for storage pool FILE_STG01.
There is an insufficient number of mount points available
for removable media. (SESSION: 8947, PROCESS: 8864)
12/30/2009 16:27:19 ANR0985I Process 8864 for MIGRATION running in the
BACKGROUND completed with completion state FAILURE at
04:27:19 PM. (SESSION: 8947, PROCESS: 8864)
12/30/2009 16:27:19 ANR4935I Migration of primary storage pool FILE_STG01 has
ended. Files migrated: 0, Bytes migrated: 0, Unreadable
Files: 0. (SESSION: 8947)


The migration just dies straight away.

Not sure what I can do next to fix this.

Backups are not working either, as the server cannot mount media.


The disk pool is full and needs to migrate to tape. I have added 5 new scratch tapes to the library, but I need to force a MIGRATION to tape.
 
Might need a little more info to diagnose this...

What value of MigProcesses is set for the stgpool (q stgp f=d)?

How many mount points are free in the devclass for the next stgpool?
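
For anyone following along, those checks look roughly like this (standard administrative commands; the pool and device class names are the ones from this thread):

q stgpool FILE_STG01 f=d
q devclass TS3100 f=d
q mount

The first shows Migration Processes for the pool, the second shows the Mount Limit for the tape device class, and q mount lists any volumes currently holding mount points.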
 
As Tony said: a little more information would be nice.
Are your drives online?
Are your paths online?
Are there any tapes in the drives when attempting to migrate?
 
You should also make sure the tape drives are still visible to your server O/S and that the paths defined to TSM have not changed on the O/S side.
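
A minimal way to verify that on the Linux side (a sketch; it assumes the IBMtape driver device names that appear later in this thread, and lsscsi may need to be installed separately):

ls -l /dev/IBMtape* /dev/IBMchanger*
lsscsi -g

If the special files are missing or the SCSI addresses have moved, the paths defined to TSM will point at the wrong devices.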
 
All drives and paths are online.

I did make a small change to the device class mount limit for the primary disk pool.

I increased it from 60 to 80. This is the only change that has been made, and it seems to have upset things.

q devclass

Device    Device     Storage Device    Format Est/Max  Mount
Class     Access     Pool    Type             Capacity Limit
Name      Strategy   Count                    (MB)
--------- ---------- ------- --------- ------ -------- ------
DISK      Random     1
FILEDEVC  Sequential 1       FILE      DRIVE  2,048.0  80
TS3100    Sequential 2       LTO       DRIVE  10,000.0 DRIVES


For device class FILEDEVC I increased the mount limit from 60 to 80.

I have tried reducing it back to 60 (although TSM may already have used all 80 of the 2 GB mount points to back up data), but migration still wouldn't work and complained about insufficient mount points.

Scratch tapes have been loaded so that doesn't seem to be an issue.

q path
Source Name Source Type Destination Destination On-Line
Name Type
----------- ----------- ----------- ----------- -------
TSM_PROD01 SERVER TS3100 LIBRARY Yes
TSM_PROD01 SERVER DRIVE1 DRIVE Yes
TSM_PROD01 SERVER DRIVE2 DRIVE Yes

tsm: TSM_PROD01>q drive
Library Name Drive Name Device Type On-Line
------------ ------------ ----------- -------------------
TS3100 DRIVE1 LTO Yes
TS3100 DRIVE2 LTO Yes



Migration still bombs out with the same errors:

migrate stgpool file_stg01 lo=0 wait=yes
ANR0984I Process 7 for MIGRATION started in the FOREGROUND at 10:57:43 AM.
ANR2110I MIGRATE STGPOOL started as process 7.
ANR1000I Migration process 7 started for storage pool FILE_STG01 manually, highMig=5,
lowMig=0, duration=None.
ANR1100I Migration started for volume /opt/tivoli/tsm/stg01/0000CAC7.BFS, storage pool
FILE_STG01 (process number 7).
ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC6.BFS is required for migration.
ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC7.BFS is required for migration.
ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC8.BFS is required for migration.
ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC9.BFS is required for migration.
ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CACA.BFS is required for migration.
ANR1122E Migration is ended for volume /opt/tivoli/tsm/stg01/0000CAC7.BFS. An insufficient
number of mount points are available for removable media.
ANR1134E Migration is ended for storage pool FILE_STG01. There is an insufficient number of
mount points available for removable media.
ANR0985I Process 7 for MIGRATION running in the FOREGROUND completed with completion state
FAILURE at 10:57:43 AM.
ANR4935I Migration of primary storage pool FILE_STG01 has ended. Files migrated: 0, Bytes
migrated: 0, Unreadable Files: 0.
ANS8001I Return code 4.
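
It may be worth noting that the ANR1102I "removable volume ... is required" messages name the FILE volumes themselves, so the mount points migration cannot get could belong to the FILEDEVC device class rather than to the tape drives. A quick check to run while migration is failing:

q mount

That lists the sequential volumes the server currently has mounted (FILE volumes should show up here too); if FILEDEVC volumes already occupy all of that device class's mount points, migration cannot open another one to read from.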


Tricky one, this, as it seems to be related to the mount points of the disk volumes.

/opt/tivoli/tsm/stg01/0000CAC5.BFS FILE_STG01 FILEDEVC 2.0 G  83.7 Full
/opt/tivoli/tsm/stg01/0000CAC6.BFS FILE_STG01 FILEDEVC 2.0 G  60.5 Full
/opt/tivoli/tsm/stg01/0000CAC7.BFS FILE_STG01 FILEDEVC 2.0 G 100.0 Full
/opt/tivoli/tsm/stg01/0000CAC8.BFS FILE_STG01 FILEDEVC 2.0 G 100.0 Full
/opt/tivoli/tsm/stg01/0000CAC9.BFS FILE_STG01 FILEDEVC 2.0 G 100.0 Full



TAPE PATH DETAILS - nothing has changed on the Linux server.

q path f=d
Source Name: TSM_PROD01
Source Type: SERVER
Destination Name: TS3100
Destination Type: LIBRARY
Library:
Node Name:
Device: /dev/IBMchanger0
External Manager:
LUN:
Initiator: 0
Directory:
On-Line: Yes
Last Update by (administrator): ADMIN
Last Update Date/Time: 11/19/2009 11:55:07

Source Name: TSM_PROD01
Source Type: SERVER
Destination Name: DRIVE1
Destination Type: DRIVE
Library: TS3100
Node Name:
Device: /dev/IBMtape0
External Manager:
LUN:
Initiator: 0
Directory:
On-Line: Yes
Last Update by (administrator): ADMIN
Last Update Date/Time: 11/19/2009 12:00:21

Source Name: TSM_PROD01
Source Type: SERVER
Destination Name: DRIVE2
Destination Type: DRIVE
Library: TS3100
Node Name:
Device: /dev/IBMtape1
External Manager:
LUN:
Initiator: 0
Directory:
On-Line: Yes
Last Update by (administrator): ADMIN
Last Update Date/Time: 11/19/2009 12:00:39


 
Might need a little more info to diagnose this...

What value of MigProcesses is set for the stgpool (q stgp f=d)?

How many mount points are free in the devclass for the next stgpool?

Thanks for your assistance, Tony.

Here are the details for my Primary stgpool - Migration Processes: 1


Storage Pool Name: FILE_STG01
Storage Pool Type: Primary
Device Class Name: FILEDEVC
Estimated Capacity: 644 G
Space Trigger Util: 98.7
Pct Util: 70.3
Pct Migr: 70.3
Pct Logical: 100.0
High Mig Pct: 5
Low Mig Pct: 1
Migration Delay: 1
Migration Continue: Yes
Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool: LTOPOOL01
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: File Storage Pool 01
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 99,999
Number of Scratch Volumes Used: 224
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): ADMIN
Last Update Date/Time: 12/14/2009 15:05:49
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:


Mount points available in the next storage pool - hmm, how can I answer that one? The next storage pool has 4 scratch tapes available, and a DB backup ran to tape this morning, so both drives are OK. I have run an audit on the library and it checked out fine.
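
For a device class with MOUNTLIMIT=DRIVES, such as TS3100 here, the number of mount points equals the number of online drives, so a rough way to count what is free is (standard commands):

q drive
q mount

Two online drives minus whatever q mount shows as currently mounted is what migration has available on the tape side.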



Storage Pool Name: LTOPOOL01
Storage Pool Type: Primary
Device Class Name: TS3100
Estimated Capacity: 75,461,878 G
Space Trigger Util:
Pct Util: 0.0
Pct Migr: 0.0
Pct Logical: 100.0
High Mig Pct: 98
Low Mig Pct: 5
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: LTO4 Tape Pool
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 99,999
Number of Scratch Volumes Used: 43
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): ADMIN
Last Update Date/Time: 12/16/2009 16:09:02
Storage Pool Data Format: Native
Copy Storage Pool(s): OFFSITE
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:
 
As Tony said: a little more information would be nice.
Are your drives online?
Are your paths online?
Are there any tapes in the drives when attempting to migrate?

Cheers, Fuzzballer.

Drives are online.
Paths are online.
No tapes in the drives when migration runs.

confused.com I am.
 
You should also make sure the tape drives are still visible to your server O/S and that the paths defined to TSM have not changed on the O/S side.

Hi SuperMOM

Drives are up
Paths are defined

I have removed the paths and redefined them to the drives. All is OK, but still no migration.

I feel I have upset the mount limit for the disk volumes on device class FILEDEVC (the disk storage pool). I can revert to MOUNTLIMIT=60, but TSM may have used all the mount points while it was set to 80.
 
Hi,

To which stgpool do you want to migrate the data?

Usually we migrate data from the disk pool to the tape pool...

Can you paste the output of q stg diskpool f=d?
 
Yes, I need to migrate my disk pool to tape.

tsm: TSM_PROD01>q stg diskpool f=d
ANR2034E QUERY STGPOOL: No match found using this criteria.
ANS8001I Return code 11.
tsm: TSM_PROD01>q stgpool

Storage     Device     Estimated    Pct   Pct   High Low Next Stora-
Pool Name   Class Name Capacity     Util  Migr  Mig  Mig ge Pool
                                                Pct  Pct
----------- ---------- ------------ ----- ----- ---- --- -----------
DISK_STG01  DISK       0.0 M        0.0   0.0   90   70  LTOPOOL01
FILE_STG01  FILEDEVC   644 G        70.3  70.3  5    0   LTOPOOL01
LTOPOOL01   TS3100     75,461,878 G 0.0   0.0   98   5
OFFSITE     TS3100     78,726,243 G 0.0

As you can see, my FILE_STG01 disk pool is 70% utilized but fails to migrate to LTOPOOL01 due to no mount points, even though scratch tapes are available and were added recently.

This is basically what I changed on the device class recently:


tsm: TSM_PROD01>q devclass

Device    Device     Storage Device    Format Est/Max  Mount
Class     Access     Pool    Type             Capacity Limit
Name      Strategy   Count                    (MB)
--------- ---------- ------- --------- ------ -------- ------
DISK      Random     1
FILEDEVC  Sequential 1       FILE      DRIVE  2,048.0  60
TS3100    Sequential 2       LTO       DRIVE  10,000.0 DRIVES

tsm: TSM_PROD01>update devclass FILEDEVC mountl=80
ANR2205I Device class FILEDEVC updated.

tsm: TSM_PROD01>q devclass

Device    Device     Storage Device    Format Est/Max  Mount
Class     Access     Pool    Type             Capacity Limit
Name      Strategy   Count                    (MB)
--------- ---------- ------- --------- ------ -------- ------
DISK      Random     1
FILEDEVC  Sequential 1       FILE      DRIVE  2,048.0  80
TS3100    Sequential 2       LTO       DRIVE  10,000.0 DRIVES
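
The q devclass summary truncates some columns, so it may be worth confirming the change with the detailed view as well:

q devclass FILEDEVC f=d

That shows the full Mount Limit along with the directory and maximum capacity settings for the FILE device class.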
 
Thanks for your assistance, Tony.

Here are the details for my Primary stgpool - Migration Processes: 1


Storage Pool Name: FILE_STG01
Storage Pool Type: Primary
Device Class Name: FILEDEVC
Estimated Capacity: 644 G
Space Trigger Util: 98.7
Pct Util: 70.3
Pct Migr: 70.3
Pct Logical: 100.0
High Mig Pct: 5
Low Mig Pct: 1
Migration Delay: 1
Migration Continue: Yes

Migration Delay is set to 1, so 24 hours must pass before data can be migrated to the next storage pool.

Update the migration delay to zero and then you can migrate everything from the disk pool to the tape pool.
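
In concrete terms (standard commands, using the pool name from this thread), that would be:

update stgpool FILE_STG01 migdelay=0
migrate stgpool FILE_STG01 lowmig=0 wait=yes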

Good Luck,
Sias
 
Here you are migrating from tapepool to tapepool (both the stgpools are sequential access), so you need two drives for each migration process.

You also need to check for this parameter in the dsmserv.opt file:

NOMIGRRECL

This parameter stops migration and reclamation from running, so please comment it out of dsmserv.opt if it is present...
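
A quick way to check whether that option is present (assuming the default server option file location on a Linux server):

grep -i nomigrrecl /opt/tivoli/tsm/server/bin/dsmserv.opt

No output means the option is not set.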
 
Thanks for your advice.

I have updated this setting on FILE_STG01 to MIGDELAY=0.

But I still get the error when I try to migrate the data from disk to tape.
 
(Quoting the advice above about needing two drives per migration process and checking dsmserv.opt for NOMIGRRECL.)



I will check these out, but this is strange: it has been running fine for over a year, and then suddenly (albeit after increasing a devclass mount limit) it started failing to migrate to the tape pool.

But I am trying to get the primary disk pool to migrate to the primary tape pool, as I understand it.
 
(Quoting the same advice about two drives per migration process and NOMIGRRECL again.)



I do have 2 drives available, so that's not an issue.

As for the dsmserv.opt file: I do not have that option in my file, so that's not an issue either.
 



I am trying to migrate disk to the tape pool, but it is failing. It worked just fine until I increased the mount limit for the disk pool so that more (2 GB) mount points would be available.
 
What happens when you reduce the mount limit for file_stg01 again? Do you really need it set so high?

I just wonder if you're hitting some sort of file limit, but I'm not really sure; your setup, from what I have seen, looks OK.

I really doubt this is it, but you could try updating LTOPOOL01's copy storage pool to be nothing rather than "OFFSITE" (upd stg ltopool01 copystgpool=whatever).
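
For what it's worth, clearing the list is done with an empty string, and it can be restored afterwards (standard UPDATE STGPOOL syntax; the parameter's full name is COPYSTGPOOLS):

upd stg ltopool01 copystgpools=""
upd stg ltopool01 copystgpools=offsite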
 
(Quoting the two-drives advice once more.)

Yes, they are both sequential, but it is set up as primary disk pool FILE_STG01 migrating to LTO pool LTOPOOL01.
 