Cleanup after Protect stgpool

StiffBoard (ADSM.ORG Senior Member; joined Mar 1, 2014)
PREDATAR Control23

I've been running both protect stgpool and replicate node between my two servers, but I'm running out of space. So I thought I'd skip some nodes from replication. I've removed the protect storage pool command and disabled replication on some nodes. But I wonder: will Spectrum clean up the replicated data on the other side when I remove replication from a couple of nodes?

Also, I had an Oracle TDP backed up to a storage pool with no replication set on it; replication doesn't seem to work on Oracle TDP and virtual tapes(?!). Since I no longer protect the storage pool, will the replicated Oracle virtual tapes be removed on the other Spectrum server?
 

But I wonder, will Spectrum cleanup the replicated data on the other side when I remove replication from a couple of nodes?
No. Once the objects are on the target, they will only be removed when they expire according to their retention policies.
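For reference, here is a sketch of the administrative commands typically used to stop replicating a node (node name is a placeholder); note that neither command deletes what was already replicated:

```
/* On the source server: stop replicating this node */
UPDATE NODE ORA_NODE1 REPLSTATE=DISABLED

/* Optionally discard the node's replication state entirely */
REMOVE REPLNODE ORA_NODE1

/* Data already on the target is NOT deleted by either command;
   it ages out under the target's retention policies. */
```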
 

If I delete a filespace from both servers and the stgpool is no longer protected (no Protect stgpool scheduled), shouldn't that release some space? Reuse delay is one day. It seems to grow each day even though we haven't added clients in a long time, only deleted them, yet it still grows.
 

If I delete a filespace from both servers and the stg no longer is protected, no Protect stg scheduled, shouldn't that release some space?
Yes, it should after the reuse delay, unless it's a dedup pool and several of the extents are shared with other nodes.


Seems like it's growing each day and we're not adding clients since long ago, just deleting, but still it grows.
Even if you are not adding clients, the existing clients' data can grow. Also, make sure expiration and reclamation (for sequential pools) are running.
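A quick sketch of the usual checks from the admin console (wildcards shown are illustrative):

```
/* See whether expiration or reclamation processes are active */
QUERY PROCESS

/* Kick off expiration manually if it hasn't run */
EXPIRE INVENTORY

/* Per-node, per-filespace view of what occupies the pools */
QUERY OCCUPANCY * *
```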
 

Dedup is done on the clients, and it is a directory-container pool on each server. Yesterday I changed management classes and cut some retention periods in half. That freed up at least 5-6 TB according to occupancy, but today there's even less available storage left... I really don't get this.
 

Do you mean disk storage or storage pool percent util?
 

Well, both actually, as there's no limit on virtual tapes in a directory container: 54 of 58 TB used, Pct Util 95.8. That's about the same, I guess. Some space was freed now that one day has passed, though not as much as I expected, but that might be explained by dedup. It's still really strange that we use this much compared to the previous installation. Guess we'll just have to buy more storage then. The capacity calculations made before the new servers were far, far from accurate.
 

Directory container, 54 of 58 TB used, Pct Util 95,8. That's about the same I guess.
It's not the same, but they happen to be close in your case.
The space used on the filesystem is the space occupied by the containers in the directories.
The percent util is how much space is used in each container. The percent util is always lower than the percentage of disk space used, because there is usually free space in several containers.
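You can compare the two views from the admin console; the pool name CONTPOOL below is a placeholder, and the SELECT assumes the server's CONTAINERS table (the trailing "-" continues a command across lines in dsmadmc):

```
/* Pool capacity and Pct Util as the server reports them */
QUERY STGPOOL CONTPOOL F=D

/* Free space sitting inside the existing containers */
SELECT CONTAINER_NAME, TOTAL_SIZE_MB, FREE_SPACE_MB -
  FROM CONTAINERS WHERE STGPOOL_NAME='CONTPOOL'
```

The gap between filesystem usage and Pct Util is roughly the sum of that per-container free space.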

If it's higher than you anticipated, it's possible some of the data doesn't dedup well. For example, compressed or encrypted data doesn't dedup well unless you are using client-side dedup and client-side encryption. If some of the data is already compressed or encrypted before you back it up, there isn't much you can do about that.
 

I hear you, but there shouldn't be much difference between the client compressing the backup and SQL compressing it; in the end, the storage pool usage would be about the same, right? We only do dedup on the clients, no compression. All SQL Server backups are compressed, but that's the way it's always been.
 

Data that doesn't dedup well takes more space, though. For the clients that do client-side compression, you should do client-side dedup as well; it will take a lot less space that way.
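A minimal sketch of how that combination is usually enabled, assuming the node name SQL_NODE1 (a placeholder) and a client options file:

```
* In the client options file (dsm.opt / dsm.sys):
DEDUPLICATION YES
COMPRESSION   YES

* On the server, allow the node to dedup on the client side:
UPDATE NODE SQL_NODE1 DEDUPLICATION=CLIENTORSERVER
```

With client-side dedup enabled, the client deduplicates before compressing, so compression no longer defeats dedup the way pre-compressed SQL dumps do.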
 

I know this is an old discussion, but here's the solution. Because you're using directory containers, those containers might be 50 GB files that only hold a few MB of data inside them. You need to either upgrade to SP 8.1.5.0 or higher, where containers get moved automatically, or query for the containers with the most free space and move them with "defrag=yes" - https://www-01.ibm.com/support/docview.wss?uid=swg27050411
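Sketched out, the manual approach from that technote looks roughly like this (pool name and container path are illustrative, not from the thread):

```
/* Find the emptiest containers in the pool */
SELECT CONTAINER_NAME, FREE_SPACE_MB FROM CONTAINERS -
  WHERE STGPOOL_NAME='CONTPOOL' ORDER BY FREE_SPACE_MB DESC

/* Consolidate one of them, releasing the slack space */
MOVE CONTAINER /tsmpool/07/0000000000000123.dcf DEFRAG=YES
```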
 