This node has exceeded its maximum number of mount points

seekerTSM

ADSM.ORG Member
Joined
Nov 4, 2019
Messages
65
Reaction score
2
Points
0
hi folks :)

Seeking your assistance on this matter. I am quite confused and not familiar with TSM, since I have been using HP DP for almost a decade.

I am wondering why, when I ran inc -absolute manually, I got the error "This node has exceeded its maximum number of mount points", yet the scheduled run (INCR) completes fine. So it only fails when I run inc -absolute manually.

If I use update node <nodename> maxnummp=<value> and change the default value, will the other backups/servers be affected?
Also, is there a limit on MAXNUMMP? Is it OK to change the default value from 1 to, say, 50? What would the effect be on other backups/servers, and is it safe to edit MAXNUMMP at all?
 
Yes, you can increase MAXNUMMP. If the data is written to tape, you don't want to set it higher than the number of tape drives you want this client to use concurrently. If it's writing to a file pool, there's no issue.
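To see a node's current limit before touching it, a quick sketch from a dsmadmc administrative session (the node name NODE1 is a placeholder):

```
tsm> query node NODE1 f=d
/* look for "Maximum Mount Points Allowed" in the detailed output */
```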

This also works in conjunction with RESOURCEUTILIZATION on the client. If that value is high, the client will open more sessions, which translates into more mount points. With -absolute, the workload probably justifies more mount points than with a regular incremental.
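For reference, RESOURCEUTILIZATION is a client option, set in dsm.opt (Windows) or dsm.sys (UNIX); the value shown here is only an example, not a recommendation:

```
* client options file - dsm.opt / dsm.sys
* default is 2; higher values allow more parallel sessions,
* each of which may need its own mount point once data goes to tape
RESOURCEUTILIZATION 2
```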
 
hi, it goes to disk then to tape. The last backup run got the error "This node has exceeded its maximum number of mount points", with 4791 errors / failed.

What should MAXNUMMP be for this? Is it OK to change the default value from 1 to 4800? Will it affect other backups? I'm confused: I need a full backup, so I use inc -absolute, but then I get that error.

How can I do a full backup using inc -absolute despite that error? How should I proceed?
 

Attachments

  • maxnump.jpg (151.7 KB)
is it ok if i change the default value from 1 to 4800?
NO!!!!

What's the resourceutil on the client?
When you do a full (absolute), does the disk pool fill up and migration start?
What is the MAXNUMMP right now?
 
hi,

What's the resourceutil on the client?
- How do I check that?
When you do a full (absolute), does the disk pool fill up and migration start?
- I believe migration starts at a specific time.
What is the MAXNUMMP right now?
- The default value is 1.
 
I really need help with this; every time we do an inc -absolute we get that error.
 
What's the resourceutil on the client?
dsmc query option resource*

When you do a full (absolute), does the disk pool fill up and migration start?
- I believe migration starts at a specific time.
What is the MAXNUMMP right now?
- The default value is 1.


If resourceutilization is set to 2 (the default), the client shouldn't be using more than 1 mount point. If it's set higher, that could cause issues, especially if migration starts during the backup: MAXNUMMP then takes effect because backups are now going to tape. You can increase MAXNUMMP to 2 as a test.
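The suggested test, as a sketch from an administrative session (the node name MYNODE is a placeholder):

```
tsm> update node MYNODE maxnummp=2
tsm> query node MYNODE f=d   /* confirm the new value took effect */
```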
 
hi,

on server
tsm> query option resource*
RESOURCEUTILIZATION: 2

on CLIENT
Maximum Mount Points Allowed: 3


Those are the default values I found. Any suggestions on how to approach this?
 
You shouldn't run into "maximum number of mount points reached" with that, unless the scheduled backup runs at the same time as your manual backup.

You should stop the client scheduler when you do your manual backup to make sure both don't run at the same time.
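How you stop the scheduler depends on how it was installed; for example, on a Linux client where it runs under the client acceptor daemon (the service name dsmcad is an assumption, it varies per setup):

```
# stop the client acceptor/scheduler before the manual backup
systemctl stop dsmcad
# ... run the manual inc -absolute ...
systemctl start dsmcad
```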

You could also empty the disk pool with migration before you start the manual backup so that the entire backup goes to disk first instead of tape, that way you don't need to worry about mountpoints.

If you encounter exceeded mount points again, immediately after it happens, check QUERY SESSION F=D and QUERY MOUNT F=D to see how many sessions that client has at that point, and how many mounts it has.
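For example, right after the failure, from an administrative session:

```
tsm> query session f=d   /* count the sessions belonging to that node */
tsm> query mount f=d     /* count the volumes currently mounted */
```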
 
You shouldn't run into "maximum number of mount points reached" with that, unless the scheduled backup runs at the same time as your manual backup.
- RESOURCEUTILIZATION: 2, mount points 3. So should I set the mount points to 2 instead?

You should stop the client scheduler when you do your manual backup to make sure both don't run at the same time.
- The scheduled backup had already finished by the time the manual backup was triggered.

You could also empty the disk pool with migration before you start the manual backup so that the entire backup goes to disk first instead of tape, that way you don't need to worry about mountpoints.
- How do I check the disk pool / run migration?

If you encounter exceeded mount points again, immediately after it happens, check QUERY SESSION F=D and QUERY MOUNT F=D to see how many sessions that client has at that point, and how many mounts it has.
- Will do this in the next run.
 
Any advice on how to solve this? Should I leave the mount point setting unchanged? Is it OK as-is, i.e. is the mount point limit not the actual problem?
 
Based on the resourceutilization and maxnummp, you shouldn't be getting this error. Maybe there are other sessions for that node holding a mount point.

I gave a few recommendations to troubleshoot and minimize the risk. Check the number of sessions and mount points as soon as possible after you get the error. Run migration first on the disk pool to empty it; that reduces the risk of the backup overflowing to tape, and therefore the risk of exceeding mount points.
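One common way to drive migration manually is to lower the pool's migration thresholds temporarily (the pool name DISKPOOL and the restored threshold values are placeholders; use your pool's real settings):

```
tsm> update stgpool DISKPOOL highmig=0 lowmig=0
/* wait for the migration processes to finish (query process), then e.g.: */
tsm> update stgpool DISKPOOL highmig=90 lowmig=70
```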
 
So I just run this, and after it completes I try to run the backup again?
tsm: TSM02>q scr A_MIGRATE

Name             Description                                         Managing profile
---------------  --------------------------------------------------  --------------------
A_MIGRATE        Migrates file and disk pools to tape
 
Maybe; I don't know what your script does. If it migrates the disk pool where your backup lands, then yes. The idea is to run migration in order to empty the pool. Note that this is not the solution to your problem, it just reduces the likelihood. You can find more info about migration here:
https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.0/com.ibm.itsm.srv.doc/t_migrate_seq.html
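To see what the script actually does before running it, a sketch using standard administrative commands:

```
tsm> query script A_MIGRATE format=lines   /* list the script's contents */
tsm> run A_MIGRATE                         /* execute it */
```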


hi, just to be sure, what is the command to migrate a disk pool?
 
hi, is space reclamation different from migration?

1,662 Space Reclamation Offsite Volume(s) (storage pool DB2_DB_OFFSITE),
Moved Files: 3285, Moved Bytes: 356 GB,
Deduplicated Bytes: 0 bytes, Unreadable Files:
0, Unreadable Bytes: 0 bytes. Current Physical
File (bytes): 1,738 MB Current input volume:
BLR084L6. Current output volume(s): BLR144L6.
 
So migrate is what I need to do, right? "Migrates file and disk pools to tape" — I need to migrate the disk pools to tape to solve the mount point issue, right?

 