Migration failing due to insufficient mount points

What happens when you reduce the mount limit for file_stg01 again? Do you really need it set so high?
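
i.e. something like this, assuming the device class behind file_stg01 is the FILEDEVC class:

Code:
upd devclass filedevc mountlimit=60
q devclass filedevc f=d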

I just wonder if you're hitting some sort of file limit, but I'm not really sure; from what I have seen, your setup looks OK.

I really doubt this is it, but you could try updating ltopool01's copystgpool to be nothing rather than "offsite". (upd stg ltopool01 copystgpool=whatever)

Yes, this is what I'm thinking, BBB. I tried to lower it back to 60, but I currently have more than that mounted in the disk storage pool. I ran migration but it still complains.
 
Hi,

Which stgpool are you trying to migrate the data from?

Usually we migrate the data from diskpool to tapepool...

Can you paste the output of q stg diskpool f=d?

I can see your point, but DISK_STG01 is just a random-access pool that never gets written to.

FILE_STG01 is what my scripts migrate daily to the tape pool and then to the OFFSITE copy pool, so this is my primary disk pool - slightly confusing setup, I know.
 
Backups are still failing on me with this error:

ANR0535W Transaction failed for session 10 for node
MP-MAN-L-P1 (Linux86) - insufficient mount points

I think I will have to delete all drives and paths (maybe the library too) and re-create them; that's the only solution I have seen for this problem and error code.

ANR1122E Migration is ended for volume
ANR1134E Migration is ended for storage pool FILE_STG01.
There is an insufficient number of mount points available
for removable media. (SESSION: 26, PROCESS: 15)


It seems a few people are suffering from this ANR1134E error.


OK I have deleted all paths and drives and re-created them.

Volumes are being checked back in to the library, so the tape drives are fine. The devclass is LTO, not generic.

It seems to be related to the virtual disk volumes which are mounted and cannot be mounted anymore.

Damn I'm stuck with this one.
 
This is my problem.

I have a lot of disk volumes mounted and cannot migrate them off disk:


Volume Name                          Storage Pool  Device Class  Est Cap  Pct Util  Status
-----------------------------------  ------------  ------------  -------  --------  -------
/opt/tivoli/tsm/stg01/0000CC81.BFS
/opt/tivoli/tsm/stg01/0000CC85.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CC89.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CC8D.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CC91.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CC95.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CC99.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CC9D.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCA1.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCA5.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCA9.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCAD.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCB1.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCB5.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCB9.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCBD.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCC1.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCC5.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCC9.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCCD.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCD1.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCD5.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCD9.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCDD.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCE1.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCE5.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCE9.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCED.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCF1.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCF5.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCF9.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CCFD.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD01.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD05.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD09.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD0D.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD11.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD15.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD19.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD20.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD24.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD28.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD2C.BFS   FILE_STG01    FILEDEVC      2.0 G        64.5  Filling
/opt/tivoli/tsm/stg01/0000CD30.BFS   FILE_STG01    FILEDEVC      2.0 G        95.5  Filling
/opt/tivoli/tsm/stg01/0000CD34.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD3B.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD3F.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD43.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD47.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD4B.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD4F.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD53.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD57.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD5B.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD5F.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD63.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD67.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD6B.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD6F.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD73.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD77.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD7B.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD7F.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD83.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD87.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD8B.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD8F.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD93.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD97.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD9B.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CD9F.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDA3.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDA7.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDAB.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDAF.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDB3.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDB7.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDBB.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDBF.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDC3.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDC7.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDCB.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDCF.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDD3.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDD7.BFS   FILE_STG01    FILEDEVC      2.0 G        98.2  Filling
/opt/tivoli/tsm/stg01/0000CDDB.BFS   FILE_STG01    FILEDEVC      2.0 G        55.1  Filling
/opt/tivoli/tsm/stg01/0000CDE5.BFS   FILE_STG01    FILEDEVC      2.0 G        76.0  Filling
/opt/tivoli/tsm/stg01/0000CDE9.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDED.BFS   FILE_STG01    FILEDEVC      2.0 G        15.6  Filling
/opt/tivoli/tsm/stg01/0000CDF1.BFS   FILE_STG01    FILEDEVC      2.0 G       100.0  Full
/opt/tivoli/tsm/stg01/0000CDF5.BFS   FILE_STG01    FILEDEVC      2.0 G        90.4  Filling
/opt/tivoli/tsm/stg01/0000CDFC.BFS   FILE_STG01    FILEDEVC      2.0 G        84.5  Filling
 
ANR1134E error

ANR1134E Migration is ended for storage pool FILE_STG01. There is an insufficient number of mount points available for removable media.


tsm: TSM_PROD01>q devclass f=d
ANR2017I Administrator ADMIN issued command: QUERY DEVCLASS f=d

Device Class Name: DISK
Device Access Strategy: Random
Storage Pool Count: 1
Device Type:
Format:
Est/Max Capacity (MB):
Mount Limit:
Mount Wait (min):
Mount Retention (min):
Label Prefix:
Library:
Directory:
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator):
Last Update Date/Time: 09/16/2008 11:37:49

Device Class Name: FILEDEVC
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: FILE
Format: DRIVE
Est/Max Capacity (MB): 2,048.0
Mount Limit: 60
Mount Wait (min):
Mount Retention (min):
Label Prefix:
Library: TS3100
Directory: /opt/tivoli/tsm/stg01
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): ADMIN
Last Update Date/Time: 01/06/2010 09:40:52

Device Class Name: TS3100
Device Access Strategy: Sequential
Storage Pool Count: 2
Device Type: LTO
Format: DRIVE
Est/Max Capacity (MB): 921,600.0
Mount Limit: DRIVES
Mount Wait (min): 60
Mount Retention (min): 60
Label Prefix: LTO4
Library: TS3100
Directory:
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption: Allow
Scaled Capacity:
Last Update by (administrator): ADMIN
Last Update Date/Time: 01/06/2010 11:09:17
 
Hi again, sorry for the delay - been off having fun.

I think BBB mentioned the synchronous offsite copy you're taking (the copystgpool param of your onsite LTO pool). I'm wondering if that is interacting with mountpoint availability in your disk pool?

With that synchronous offsiting you'll need two mountpoints in the LTO pool for each mountpoint in the seq disk pool that's being migrated: one drive for the primary write and a second for the simultaneous copy to the offsite pool, so with two drives a single migration process can consume both. It looks like the first file being migrated spans multiple seq disk volumes (5 or so, was it?) - possibly when you increased the number of seq disk mountpoints you reduced pressure on those mounts, such that the migration process could attempt to use more LTO drives than you have... to be honest I'm not exactly sure how (perhaps after I've had a coffee).

I guess my only $0.02 for this morning is (rough commands below):

1. Try disabling the synchronous copy by setting your copystgpool to null, as BBB has suggested.
2. I'm not convinced this will help, but you might want to reduce the mount retention on your LTO to 0.
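
Something like this - assuming LTOPOOL01 is your onsite LTO pool and TS3100 is its device class, as your q devclass output suggests:

Code:
upd stgpool ltopool01 copystgpool=""
upd devclass ts3100 mountretention=0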

Cheers,

T
 
I would like to eliminate this also.

I have the copystgpool set to OFFSITE - maybe this is not correct and I should set it to null... BUT... null is not defined anywhere?

tsm: TSM_PROD01>update stg LTOPOOL01 copystgpool=null
ANR4731E UPDATE STGPOOL: The copy storage pool NULL is not defined.
ANS8001I Return code 11.
 
Migration delay is set to 1, which means 24 hours must pass before the data can be migrated to the next storage pool.

Update the migration delay to zero and then you can migrate everything from the disk pool to the tape pool.
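
For example, assuming the sequential disk pool is FILE_STG01:

Code:
upd stgpool file_stg01 migdelay=0
migrate stgpool file_stg01 lo=0 wait=yes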

Good Luck,
Sias


No luck. I updated this value to 0, but I still cannot force a migration.

Same error:

ANR1122E Migration is ended for volume /opt/tivoli/tsm/stg01/0000CAC7.BFS. An insufficient number of mount points are available
for removable media.
ANR1134E Migration is ended for storage pool FILE_STG01. There is an insufficient number of mount points available for removable
media.
 
I would like to eliminate this also.

I have the copystgpool set to OFFSITE - maybe this is not correct and I should set it to null... BUT... null is not defined anywhere?

tsm: TSM_PROD01>update stg LTOPOOL01 copystgpool=null
ANR4731E UPDATE STGPOOL: The copy storage pool NULL is not defined.
ANS8001I Return code 11.

Hey hey,

Sorry - try this command:

Code:
upd stgp ltopool01 copystg=""

T
 
Hi again,

I'm curious about your mountpoint utilisation too - mind posting the output of this SQL?

Code:
select library_name, drive_state, count(drive_state) from drives group by library_name, drive_state

Cheers,

T
 
OK, LTOPOOL01's copystgpool has been set to null.

Here is the output from the SQL statement.

tsm: TSM_PROD01>select library_name, drive_state, count(drive_state) from drives group by library_name, drive_state
LIBRARY_NAME DRIVE_STATE Unnamed[3]
------------------ ------------------ -----------
TS3100 EMPTY 2
 
What appears in your server activity log when you try to do the backup and it fails?

How are things working now you've set the copypool to nothing?

How many volumes do you have in the file pool? Why are they all 2GB if you have a really large number? It may be related to that.

Can you post the output of "q mount"?

Halting the server and restarting it will clear any mounts that have hung for some reason.
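
If it comes to that, the sequence is roughly this - the install path is the usual Linux default, adjust for your instance:

Code:
q mount                          (check for hung mounts first)
halt                             (stops the server immediately)
cd /opt/tivoli/tsm/server/bin    (from the OS shell, as the user that normally starts the server)
./dsmserv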
 
Hmm - ok that's a wee bit confusing...

The pool you're trying to migrate has a sequential access strategy, but there's no separate library or drives associated with it. Admittedly I've not used sequential disk pools much, but I was expecting to see some output from the SQL showing how many drives were active in that pool.

<goes and looks at the thread history>

Now that I've had a squizz I see that your filedevc is associated with the TS3100 (LTO) library. I don't think that's a go-er, to be frank... I've got a test rig that I'm looking to put into production in the next month or so using sequential FILE type devices, and it uses a separate library completely - at definition time TSM went off and automagically created the additional library and virtual drives. Whether you can do that manually or not I don't know...

In essence then, I think you're in a wee bit of strife. You've got a catch-22... there is data to migrate, but the config infrastructure to migrate it is out of whack. I'm going to have to defer to someone with greater knowledge of sequential disk devices - if this were a tape-based device I'd be checking out all of the carts, removing the library/drives completely, then redefining and checking back in (rough sequence below). No idea if there's an equivalent for a seq FILE class config.
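
For a tape setup that sequence would look something like this - the volume range, drive and device names are placeholders, and I've left out the library path definition:

Code:
checkout libvolume ts3100 volrange=VOL001,VOL100 checklabel=no remove=bulk
delete path tsm_prod01 drive1 srctype=server desttype=drive library=ts3100
delete drive ts3100 drive1
delete library ts3100
define library ts3100 libtype=scsi
define drive ts3100 drive1
define path tsm_prod01 drive1 srctype=server desttype=drive library=ts3100 device=/dev/IBMtape0
checkin libvolume ts3100 search=yes checklabel=barcode status=scratch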

Good luck,

Tony

P.S. Maybe the whole shebang went out of whack when you modified the mountlim?


P.P.S.

Hmm I might be leading you up the garden path a bit here - in my test rig I'm sharing the file volumes with stgagents, so I defined it all as a shared config - a quick RTFM indicates that only in this circumstance will the additional virtual lib/drives get created.

Are you able to figure out how many sessions are active against the seq FILE class? It might be worth dropping the number of client sessions to, say, 10% below the mount limit for the dev class (factoring in maxnummp for the clients as well) to ensure you have enough headroom to handle the migration... (sketch below)
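
Something along these lines - the node name is just an example, and it assumes the server's nodes table exposes max_mp_allowed:

Code:
q devclass filedevc f=d
select node_name, max_mp_allowed from nodes
upd node mp-man-l-p1 maxnummp=1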
 
Jesus, Hail Mary, it's working (sorry, I'm a bit Irish).

Well it has moved further this time without complaining about mount points.


tsm: TSM_PROD01>q proc
ANR2017I Administrator ADMIN issued command: QUERY PROCESS
Process Process Description Status
Number
-------- -------------------- -------------------------------------------------
65 Migration Volume /opt/tivoli/tsm/stg01/0000CAC7.BFS
(storage pool FILE_STG01), Moved Files: 0, Moved
Bytes: 0, Unreadable Files: 0, Unreadable Bytes:
0. Current Physical File (bytes): 7,910,893,918
Current output volume: 126ABWL4.
tsm: TSM_PROD01>mv: failed to preserve ownership for `./0000cdb3.bfs': Permission denied
ANR1121E Migration is ended for volume /opt/tivoli/tsm/stg01/0000CAC7.BFS - storage. Media is inaccessible.
ANR1021W Migration process 65 terminated for storage pool FILE_STG01 - storage media inaccessible.
ANR0985I Process 65 for MIGRATION running in the FOREGROUND completed with completion state FAILURE at 02:38:28 PM.
ANR0514I Session 22 closed volume 126ABWL4.
ANR4935I Migration of primary storage pool FILE_STG01 has ended. Files migrated: 0, Bytes migrated: 0, Unreadable Files: 0.
ANR2017I Administrator ADMIN issued command: ROLLBACK

What appears in your server activity log when you try to do the backup and it fails?

How are things working now you've set the copypool to nothing?

How many volumes do you have in the file pool? Why are they all 2GB if you have a really large number? It may be related to that.

Can you post the output of "q mount"?

Halting the server and restarting it will clear any mounts that have hung for some reason.
 
I'm really sorry to spam you with activity logs - I'm just trying to gather all the steps that have been taken as well.


01/06/2010 14:06:27 ANR2017I Administrator ADMIN issued command: select
library_name, drive_state, count(drive_state) from drives
group by library_name, drive_state (SESSION: 22)
01/06/2010 14:08:28 ANR2017I Administrator ADMIN issued command: MIGRATE
STGPOOL file_stg01 lo=0 wait=yes (SESSION: 22)
01/06/2010 14:08:28 ANR0984I Process 64 for MIGRATION started in the
FOREGROUND at 02:08:28 PM. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR2110I MIGRATE STGPOOL started as process 64. (SESSION:
22, PROCESS: 64)
01/06/2010 14:08:28 ANR1000I Migration process 64 started for storage pool
FILE_STG01 manually, highMig=90, lowMig=0, duration=None.
(SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR1100I Migration started for volume
/opt/tivoli/tsm/stg01/0000CAC7.BFS, storage pool
FILE_STG01 (process number 64). (SESSION: 22, PROCESS:
64)
01/06/2010 14:08:28 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC6.-
BFS is required for migration. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC7.-
BFS is required for migration. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC8.-
BFS is required for migration. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC9.-
BFS is required for migration. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CACA.-
BFS is required for migration. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR1122E Migration is ended for volume
/opt/tivoli/tsm/stg01/0000CAC7.BFS. An insufficient
number of mount points are available for removable media.
(SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR1134E Migration is ended for storage pool FILE_STG01.
There is an insufficient number of mount points available
for removable media. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR0985I Process 64 for MIGRATION running in the
FOREGROUND completed with completion state FAILURE at
02:08:28 PM. (SESSION: 22, PROCESS: 64)
01/06/2010 14:08:28 ANR4935I Migration of primary storage pool FILE_STG01 has
ended. Files migrated: 0, Bytes migrated: 0, Unreadable
Files: 0. (SESSION: 22)



01/06/2010 14:23:23 ANR2017I Administrator ADMIN issued command: QUERY
DEVCLASS f=d (SESSION: 22)

01/06/2010 14:26:03 ANR2017I Administrator ADMIN issued command: UPDATE
DEVCLASS filedevc library= (SESSION: 22)
01/06/2010 14:26:03 ANR2205I Device class FILEDEVC updated. (SESSION: 22)
01/06/2010 14:26:05 ANR2017I Administrator ADMIN issued command: QUERY
DEVCLASS f=d (SESSION: 22)
01/06/2010 14:26:18 ANR2017I Administrator ADMIN issued command: QUERY
DEVCLASS filedevc f=d (SESSION: 22)
01/06/2010 14:26:34 ANR2017I Administrator ADMIN issued command: UPDATE
DEVCLASS filedevc mountl=80 (SESSION: 22)
01/06/2010 14:26:34 ANR2205I Device class FILEDEVC updated. (SESSION: 22)
01/06/2010 14:26:35 ANR2017I Administrator ADMIN issued command: QUERY
DEVCLASS filedevc f=d (SESSION: 22)


01/06/2010 14:37:16 ANR2017I Administrator ADMIN issued command: MIGRATE
STGPOOL file_stg01 lo=0 wait=yes (SESSION: 22)
01/06/2010 14:37:16 ANR0984I Process 65 for MIGRATION started in the
FOREGROUND at 02:37:16 PM. (SESSION: 22, PROCESS: 65)
01/06/2010 14:37:16 ANR2110I MIGRATE STGPOOL started as process 65. (SESSION:
22, PROCESS: 65)
01/06/2010 14:37:16 ANR1000I Migration process 65 started for storage pool
FILE_STG01 manually, highMig=90, lowMig=0, duration=None.
(SESSION: 22, PROCESS: 65)
01/06/2010 14:37:16 ANR1100I Migration started for volume
/opt/tivoli/tsm/stg01/0000CAC7.BFS, storage pool
FILE_STG01 (process number 65). (SESSION: 22, PROCESS:
65)
01/06/2010 14:37:16 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC6.-
BFS is required for migration. (SESSION: 22, PROCESS: 65)
01/06/2010 14:37:16 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC7.-
BFS is required for migration. (SESSION: 22, PROCESS: 65)
01/06/2010 14:37:16 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC8.-
BFS is required for migration. (SESSION: 22, PROCESS: 65)
01/06/2010 14:37:16 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CAC9.-
BFS is required for migration. (SESSION: 22, PROCESS: 65)
01/06/2010 14:37:16 ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CACA.-
BFS is required for migration. (SESSION: 22, PROCESS: 65)
01/06/2010 14:37:16 ANR1401W Mount request denied for volume
/opt/tivoli/tsm/stg01/0000CAC6.BFS - mount failed.
(SESSION: 22, PROCESS: 65)
01/06/2010 14:37:56 ANR8337I LTO volume 126ABWL4 mounted in drive DRIVE2
(/dev/IBMtape1). (SESSION: 22, PROCESS: 65)
01/06/2010 14:37:57 ANR0513I Process 65 opened output volume 126ABWL4.
01/06/2010 14:38:28 ANR1121E Migration is ended for volume
/opt/tivoli/tsm/stg01/0000CAC7.BFS - storage. Media is
inaccessible. (SESSION: 22, PROCESS: 65)
01/06/2010 14:38:28 ANR1021W Migration process 65 terminated for storage pool
FILE_STG01 - storage media inaccessible. (SESSION: 22,
PROCESS: 65)
01/06/2010 14:38:28 ANR0985I Process 65 for MIGRATION running in the
FOREGROUND completed with completion state FAILURE at
02:38:28 PM. (SESSION: 22, PROCESS: 65)
01/06/2010 14:38:28 ANR0514I Session 22 closed volume 126ABWL4. (SESSION: 22)
01/06/2010 14:38:28 ANR4935I Migration of primary storage pool FILE_STG01 has
ended. Files migrated: 0, Bytes migrated: 0, Unreadable
Files: 0. (SESSION: 22)

01/06/2010 14:39:29 ANR8336I Verifying label of LTO volume 126ABWL4 in drive
DRIVE2 (/dev/IBMtape1). (SESSION: 22, PROCESS: 65)
01/06/2010 14:40:04 ANR0405I Session 24 ended for administrator ADMIN
(Linux86). (SESSION: 24)
 
What did you change? Did you do a restart?

Are you running the TSM server as the user who normally starts it (i.e. root)? You are getting permission problems there now. Check the perms on the files and make sure you're running the server as the right user.
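
A quick way to check - paths from the earlier posts; the chown is only an example fix if ownership has drifted:

Code:
ps -ef | grep dsmserv                          (which user owns the running server)
ls -l /opt/tivoli/tsm/stg01                    (who owns the FILE volumes)
chown root:root /opt/tivoli/tsm/stg01/*.bfs    (example fix, only if needed)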
 
Hmm - ok that's a wee bit confusing...

The pool you're trying to migrate has a sequential access strategy, but there's no separate library or drives associated with it. Admittedly I've not used sequential disk pools much, but I was expecting to see some output from the SQL showing how many drives were active in that pool.

<goes and looks at the thread history>

Now that I've had a squizz I see that your filedevc is associated with the TS3100 (LTO) library. I don't think that's a go-er, to be frank... I've got a test rig that I'm looking to put into production in the next month or so using sequential FILE type devices, and it uses a separate library completely - at definition time TSM went off and automagically created the additional library and virtual drives. Whether you can do that manually or not I don't know...

In essence then, I think you're in a wee bit of strife. You've got a catch-22... there is data to migrate, but the config infrastructure to migrate it is out of whack. I'm going to have to defer to someone with greater knowledge of sequential disk devices - if this were a tape-based device I'd be checking out all of the carts, removing the library/drives completely, then redefining and checking back in. No idea if there's an equivalent for a seq FILE class config.

Good luck,

Tony

P.S. Maybe the whole shebang went out of whack when you modified the mountlim?


P.P.S.

Hmm I might be leading you up the garden path a bit here - in my test rig I'm sharing the file volumes with stgagents, so I defined it all as a shared config - a quick RTFM indicates that only in this circumstance will the additional virtual lib/drives get created.

Are you able to figure out how many sessions are active against the seq FILE class? It might be worth dropping the number of client sessions to, say, 10% below the mount limit for the dev class (factoring in maxnummp for the clients as well) to ensure you have enough headroom to handle the migration...

........................................................................................................................

No, I don't think you're leading me up the garden path - any path is good at the moment.

I removed that library=TS3100 setting from the devclass and hey presto, it has stopped complaining about mount points. So that is good.

It did fail, but for other reasons unbeknownst to me at present.
 
It seems to be moving forward after this change:

UPDATE DEVCLASS filedevc library=
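
To confirm it took, the Library field should now be blank:

Code:
q devclass filedevc f=d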

Although it's not quite there yet.

But it makes sense that the mount requests were denied, as the volumes are not there at present - just moving them back in now.

And crossing fingers, legs and toes.


What did you change? Did you do a restart?

Are you running the TSM server as the user who normally starts it (i.e. root)? You are getting permission problems there now. Check the perms on the files and make sure you're running the server as the right user.

Yeah, good idea. I will halt the server and restart DSMSERV.

Checking the perms on the files, they are all owned by root - which I believe they always were.

Although I am being naughty and copying all these files off onto a NAS device as a backup before I blow them all away.

Only 3 left at the mo

ll /opt/tivoli/tsm/stg01 | wc -l
3
[2]+ Done mv -i /opt/tivoli/tsm/stg01/*.bfs .
[root@mp-man-l-p1 Colin_Dont_delete]# ll /opt/tivoli/tsm/stg01 | wc -l
3

I'll have to move them back now that it seems to be working again.
 
Cool, sounds promising. Glad I had a look from home (mostly on holiday at the moment)... it was going to prey on my mind (sad, huh). Must be something to do with the hack used to share out seq FILE volumes... a bit of an iffy architectural decision imho - creating a lib+drives when the volumes are shared, but not creating that config when they're only used internally. With that library defined in the dev class, the TSM code must have been looking for virtual drives. T
 