How to recover disk space

chenao

ADSM.ORG Member
Joined
Apr 30, 2008
Location
Chilliwack BC Canada
I defined expiration for the data backed up under the policy domain, but I see that the disks on the server are almost full. I tried to run the EXPIRE INVENTORY command, but no process is found when I run QUERY PROCESS. I have a 2.0 TB disk at 100% used with /storage/tivoli, a 32 GB disk at 98% used with PrimaryDB, and a 1.9 TB disk at 66% used with data. How can I recover space from the disks? Do I have to change the policy domain? In the dsmserv.opt file I have no EXPInterval defined; do I have to set it? Thanks in advance for any suggestion.
 
TSM doesn't use disk space on its own; you have to tell it to via scratch FILE volumes. Are you doing that? What exactly is using your space?
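For reference, a minimal sketch of running expiration by hand and enabling the server's automatic expiration interval. The admin ID and password are placeholders, and this is only the standard pattern, not a prescription for this server:

```
# dsmserv.opt -- have the server run expiration automatically every 24 hours
EXPINTERVAL 24
```

```
# One-off from the administrative client; WAIT=YES keeps the session attached
dsmadmc -id=admin -password=secret "expire inventory wait=yes"
# Without WAIT=YES it runs in the background -- check for it like this
dsmadmc -id=admin -password=secret "query process"
```

Note that expiration frees space inside TSM's storage pools; it does not by itself shrink the defined volumes that df sees at the OS level.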
 
Using the ISC I defined two management classes; the one used as the default keeps 3 versions, and the last file version is kept 45 days. I created 2 data pools but I need to create 3 more. Under the maintenance script, Expiration has the "generate script" option checked, "Directory objects are expired" set to yes, and 480 minutes to cancel the process. Do I have to uncheck these options and run the EXPIRE INVENTORY command from the command line again? Thanks
 
So how does that have anything to do with your filesystem being almost full?

If you don't know the answer to this, you really need to think about figuring it out, or finding something else to do. If you just aren't saying that you set up TSM to dynamically use triggers to create new volumes, well... say so, so we know.

Sorry if I'm mis-reading your post, but TSM data storage usually has very little to do with your filesystems filling up on the server machine.
 
Under TSM_PrimaryDB there are 4 metadata.vol files (1, 2, 3, 4) and the disk is 98% full.
Under tivoli there are 60 backuppool.vol files (1000 to 11F) and the disk is 100% of 2 TB. Those are the two disks I am really worried about.
Any suggestions?
Thanks
 
That could be perfectly normal. Tell us why it isn't normal.

It's normal to use up nearly 100% of a filesystem with TSM volumes because they are defined volumes, not dynamic. Unless you are doing something dynamic such as triggering new volumes to be created, or using scratch FILE volumes.

But you haven't answered if you are doing that yet, so I'm not even convinced you have a problem.
 
Thanks Wildwest, I think you are right; everything seems to be OK. My concern started when running the df command, but it has to be something related to the space definition in the DB settings. I will check the settings.
I appreciate your help.
 
To delete the disk volumes that house database backups you need to use del volhist type=dbb todate=whichever date is appropriate. If you use DRM it gets a bit more complicated, but it comes down to using Set DRMDBBACKUPEXPIREDAYS.

The file system sits under the storage pool, so it can be 100% full even while the storage pool has available space: defined scratch volumes take up room in the file system and can max it out. For example, if you have 100 GB volumes in a 10 TB disk pool with a device class of type FILE, and 100 volumes are defined (some scratch and some containing data), the file system will report as full to the OS. That doesn't mean the storage pool is full, as long as scratch volumes are still available.
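The arithmetic above can be sketched out. The 100 x 100 GB figures come from the example; the 60/40 split between data and scratch volumes is an assumed illustration:

```python
# Distinguish OS-level filesystem usage from TSM storage-pool utilization.
# All figures are hypothetical, matching the 100 x 100 GB example above.

VOLUME_SIZE_GB = 100
FILESYSTEM_GB = 10_000          # 10 TB file system under the pool

data_volumes = 60               # assumed: volumes actually holding backup data
scratch_volumes = 40            # assumed: pre-defined but still empty volumes

# Every defined volume occupies its full size on disk, scratch or not,
# so the OS sees the filesystem as completely allocated:
fs_used_gb = (data_volumes + scratch_volumes) * VOLUME_SIZE_GB
fs_used_pct = 100 * fs_used_gb / FILESYSTEM_GB

# The storage pool only counts volumes holding data as "used":
pool_used_pct = 100 * data_volumes * VOLUME_SIZE_GB / FILESYSTEM_GB

print(f"filesystem used: {fs_used_pct:.0f}%")     # df looks 100% full
print(f"storage pool used: {pool_used_pct:.0f}%") # pool is only 60% used
```

So a full df output on its own tells you nothing about whether TSM is out of room.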

If you start deleting volumes (including scratch) from the storage pool, the file system will report the freed space at the OS level. The trigger that creates new scratch volumes assumes sufficient space in the storage pool, and thus in the file system underneath it; it is not the same as a space trigger, which automatically increases space in a file system.
 
Disk Space

Thanks ypcat, in fact it was the space allocated in the database backup definition. By the way, when I look under database properties, I see "percentage changed since last database backup" is 45.6%, even though the DB backup runs every day. I checked this immediately after the backup finished, and the tape shows the last modification date correctly. Could it mean that the DB backup is not running, or why is this percentage so high?
 
If you have sessions with bad connections, or sessions interrupted by something happening on the client, you will end up with data in the log that is held from the commit point so the log can roll back. These sessions 'pin the log', and you can identify them by running the undocumented command:

show logpinned

I have almost 300 nodes, all of which are non-local (we're an SSP), so I hit this problem quite a bit. I've written an elaborate script to check for sessions that are pinning the log; when one is found, I cancel the session and record the event to a log which is automatically mailed to me every 24 hours. You might want to consider something similar.

Cancelling the offending session will free up the log space. You should then investigate the node causing the issue.
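A rough sketch of such a checker, assuming a wrapper around dsmadmc. The output format of SHOW LOGPINNED is undocumented and varies by server version, so the sample text and the regex below are assumptions to adapt, and the admin credentials and log path are placeholders:

```python
import datetime
import re
import subprocess

# Assumed pattern: SHOW LOGPINNED output mentions the offending session
# number as "Session NNNN". Adjust to what your server actually prints.
PINNED_RE = re.compile(r"session\s+(\d+)", re.IGNORECASE)

def find_pinning_sessions(show_output: str) -> list[int]:
    """Extract session numbers mentioned in SHOW LOGPINNED output."""
    return [int(m.group(1)) for m in PINNED_RE.finditer(show_output)]

def cancel_and_log(session_id: int, logfile: str = "/var/log/tsm_pinned.log") -> None:
    """Cancel an offending session and record the event for the daily mail."""
    subprocess.run(
        ["dsmadmc", "-id=admin", "-password=secret",
         f"cancel session {session_id}"],
        check=False,
    )
    with open(logfile, "a") as f:
        f.write(f"{datetime.datetime.now().isoformat()} cancelled session {session_id}\n")

# Hypothetical sample of what SHOW LOGPINNED might report:
sample = "Log is pinned by dirty page for Session 4711 (node CLIENT42)"
print(find_pinning_sessions(sample))   # [4711]
```

Run from cron, with the parse-then-cancel step feeding the log file that gets mailed out daily.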
 