stgpooldir size

gernthefish

ADSM.ORG Member
We are backing up on-prem clients to a TSM server running on an AWS EC2 instance. Most backups work fine, but a couple of clients are failing with ANR0522W "out of server storage space", which I assume is due to insufficient stgpooldir size. It looks like each of these clients has a couple of very large files. We currently have only 200 GB for the stgpooldir.

Is there any alternative to adding cache space? I hesitate to do this for a few large files on just 1 or 2 clients.
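For anyone checking the same thing, something like this from a dsmadmc session should show where the space is going (DEDUPPOOL below is just a placeholder for the actual directory-container pool name):

  /* per-directory free and total space in the directory-container pool */
  query stgpooldir DEDUPPOOL

  /* overall pool capacity and utilization */
  query stgpool DEDUPPOOL format=detailed

  /* recent out-of-space messages in the activity log */
  query actlog begindate=today-1 search=ANR0522W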
 
I am not sure if it applies to you or not, but the client reserves space on the server based on the actual size of the objects being backed up, before dedup and compression. So if you have 100 GB available in the pool and a client tries to back up a 110 GB file, there would not be enough space to reserve the full 110 GB, even if the file would be much smaller after compression and dedup. After the backup completes, the reserved space is released and only the space actually needed is used.

This server option may help you with that by lowering the amount it pre-allocates:

So if you were to set that option to 2, for example, that 110 GB file would only reserve 55 GB. You don't want to set it too high, because then it may not reserve enough space, but also not too low, because then you would reserve more space than necessary. It should be close to your dedup ratio, maybe slightly lower.
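To put rough numbers on it, assuming for the sake of example that the data dedups and compresses at about 4:1: that 110 GB file only needs roughly 27.5 GB of actual pool space, so a setting of 3 would reserve about 37 GB (enough headroom), while a setting of 5 would reserve only 22 GB and the backup could still fail with ANR0522W.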
 
Cool! We had considered enabling client dedup/compression, but this might be a better alternative. Thanks, Marclant!
 
Kind of a related question - if I add space to the local cache (stgpooldir), can it fairly easily be shrunk back down later? Can the directories be removed and recreated without causing any problems?
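(For reference, a rough sketch of the commands usually involved in retiring a directory from a directory-container pool; DEDUPPOOL and /tsm/dir03 are placeholders, and it is worth checking HELP for each command on your server level:

  /* stop new data from being written to the directory being retired */
  update stgpooldir DEDUPPOOL /tsm/dir03 access=readonly

  /* list the containers in the pool, then move data out of the ones under that directory */
  query container stgpool=DEDUPPOOL
  move container /tsm/dir03/00/0000000000000004.dcf

  /* once no containers remain under it, remove the directory from the pool */
  delete stgpooldir DEDUPPOOL /tsm/dir03
)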
 