Upgrade TSM 5.5.0.0 to 5.5.4.0

I'm still having issues with the database buffer pool cache hit ratio. Today the status report showed 97.8 < 98.

The server has 4GB of physical memory. If I do a q sys f=d on the server, it states that Bufpoolsize=1048576 K (1GB). However, if I open the dsmserv.opt file, it states BUFPoolsize=524288 K.
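For reference, this is how I'm comparing the two values (output trimmed to the relevant line; SERVER1 and the opt file location are just placeholders for our instance):

  tsm: SERVER1> query option bufpoolsize
  BufPoolSize 1048576

  $ grep -i bufpoolsize dsmserv.opt
  BUFPoolsize 524288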

Based on the dsmserv.opt settings, I was going to increase the size from 12.5% (524288 K) to 15% (629145 K).
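The arithmetic behind those numbers, in case anyone wants to check it:

  4 GB physical memory = 4 * 1024 * 1024 K = 4194304 K
  12.5% of 4194304 K = 524288 K   (the value in dsmserv.opt)
  15.0% of 4194304 K = 629145.6 K, rounded down to 629145 K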

But when I attempted to set the Bufpoolsize using the setopt command, I got this error: ANR0385I Could not free sufficient buffers to reach reduced BUFPoolsize.
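The command was along these lines (SERVER1 is a placeholder):

  tsm: SERVER1> setopt bufpoolsize 629145
  ANR0385I Could not free sufficient buffers to reach reduced BUFPoolsize.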

My assumption here is that the TSM server is using the 1GB setting for the Bufpoolsize; however, I thought it was set in the dsmserv.opt file to 524288 K. Any ideas on this particular issue?

 
You might find that there are multiple BufPoolSize lines in your dsmserv.opt: when you change the parameter dynamically, the server appends a new row to your opt file. Check at the bottom of the file; the last row wins.
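A quick way to check is to grep the file with line numbers; if more than one line comes back, the last one is what the server honors (output below is illustrative):

  $ grep -in bufpoolsize dsmserv.opt
  88:BUFPoolsize 524288
  4102:BUFPoolsize 1048576    <- the one in effect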
 
Thanks, TonyB. You were right. I did a search through the whole dsmserv.opt file and, guess what? I found about five different entries for BufPoolSize, all at the very bottom of the .opt file. Of course the last one listed was the 1GB setting. I'm gonna have to hunt someone down.... :roll:

That still does not solve the issue we are having with the database cache hit ratio. I was hoping that increasing it to 15% of total memory would address the issue; however, it turns out it had already been increased well above the recommended 15% setting. Could that also cause the database cache hit percentage to fall below the 98% threshold?
 
Try lowering the number to something like 262144. Remember that the ratio changes throughout the day; check it after daily processing.
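If you want a clean reading after the change, something like this should do it (reset bufpool zeroes the cumulative cache statistics, so the next q db f=d reflects only activity since the change):

  tsm: SERVER1> setopt bufpoolsize 262144
  tsm: SERVER1> reset bufpool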
 
Sorry, I didn't notice that you had an earlier question which I don't think has been dealt with...

You can see the buffer pool stats via a detailed db query (q db f=d). The "Cache Hit Pct" is the one your op reporting is complaining about. The more serious stat is the "Cache Wait Pct" (tasks blocking until bufpool pages can be freed).
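The relevant lines of the detailed output look like this (figures illustrative, using the 97.8 you reported):

  tsm: SERVER1> query db format=detailed
  ...
        Cache Hit Pct.: 97.80
       Cache Wait Pct.: 0.00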

The required size of the bufpool is entirely dependent on server activity. A number of tasks generate database reads. Most backup processes generate reads during the scanning phase, where the client is compiling a list of objects to send to the server; administrative processes generate reads as they (for example) work out which objects have already been copied to secondary storage pools; and administrators themselves can generate reads by querying database tables (a select on the backups table, or a query of the contents of a tape, for example).
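For example, either of these will drive a fair amount of database read (the node and volume names are placeholders):

  tsm: SERVER1> select count(*) from backups where node_name='NODE1'
  tsm: SERVER1> query content VOL001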

The pool stats tend to reset after expiration processing as that task hits a large chunk of the database (i.e. it was deemed to be a reliable point of comparison by the devs).

It may be that you could reduce the ad-hoc or even automated "self-inflicted" database I/O by reducing the number of admin commands you use to hit the database.

It's entirely possible that your database size and content (i.e. the number of inactive objects to be processed, and the daily volatility from new backup objects being transmitted) simply warrant a larger bufpool (and perhaps more physical memory).

Try running a scheduled "q db", collecting the hit ratio more frequently than once per day (i.e. more often than your op reporting does). That might give you an idea of what is causing the hit-percentage issue (e.g. whether it happens during the backup window).
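A minimal administrative schedule for that would be something like the following (the schedule name and one-hour interval are just examples). The output lands in the activity log, so you can pull the ratio back out later with query actlog:

  tsm: SERVER1> define schedule db_check type=administrative cmd="query db format=detailed" active=yes starttime=00:00 period=1 perunits=hours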
 