stgpooldir 100% utilized

mkraker
ADSM.ORG Member
Joined: Feb 9, 2011
Messages: 20
Reaction score: 1
Points: 0

Good day,

I have 6 container pool directories at 100% (on my replication server):

name#/spcont/SPcont03>: df -g | grep spcont

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/contlv00 10232.00 0.00 100% 1052 83% /spcont/SPcont00
/dev/contlv01 10232.00 0.00 100% 1058 85% /spcont/SPcont01
/dev/contlv02 10232.00 0.00 100% 1053 87% /spcont/SPcont02
/dev/contlv03 10232.00 0.00 100% 1073 82% /spcont/SPcont03
/dev/contlv04 10232.00 0.00 100% 1060 90% /spcont/SPcont04
/dev/contlv05 10232.00 0.00 100% 1046 89% /spcont/SPcont05

/dev/contlv06 10232.00 3615.75 65% 691 1% /spcont/SPcont06
/dev/contlv07 10232.00 5918.03 43% 446 1% /spcont/SPcont07
/dev/contlv08 10232.00 7118.38 31% 323 1% /spcont/SPcont08
/dev/contlv09 10232.00 8608.23 16% 176 1% /spcont/SPcont09

Those at 100% were created initially and the other 4 later. Is this normal behaviour, and should I be worried?
Is there some way I can rebalance the full directories?

I also see that my replication storage pool is bigger than the primary (expiration is running fine on the replication server).
The DB2 database is also bigger on the replication server, and it keeps growing faster every day than the primary DB2 database.

In AIX I also got messages that the filesystems were full.

Any clarification appreciated.


Kind regards

Michel.
 

See attachment. 100% utilisation looks OK, it shows some GBs free.
 

Attachments

  • Screenshot 2021-06-18 at 10.31.14.png (354.7 KB)

For performance, it's better if that is spread more evenly. This will happen naturally over time with the automatic move containers that run, but you could speed it up by doing some manual move containers: set the 6 full directories to read-only and run move containers to shift data into the other 4 until all 10 are closer to the same occupancy.

In the future, assuming your performance is good, it might be better to extend those 10 filesystems rather than add more; that way you don't create this imbalance. What's happening right now is that most new writes are directed to the 4 new filesystems instead of the 6 full ones, so you don't yet get the benefit of spreading the workload across 10 filesystems.

As for the target being larger than the source: for expiration on the target to work, replication must complete successfully for every node. So make sure that both replication and expiration complete successfully. If both are completing successfully daily, then you might need to open a case with IBM to look into this.
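For example, a minimal sketch of that sequence from dsmadmc; the pool name CONTPOOL and the container path are just placeholders, adjust them to your environment:

Code:
/* stop new writes into one of the full directories (repeat for SPcont01-SPcont05) */
update stgpooldirectory CONTPOOL /spcont/SPcont00 access=readonly

/* list the containers in the pool, then move individual ones; with the full
   directories read-only, the server places them in the emptier directories */
query container stgpool=CONTPOOL
move container /spcont/SPcont00/00/0000000000000001.dcf wait=yes

/* once occupancy is roughly even again, allow writes everywhere */
update stgpooldirectory CONTPOOL /spcont/SPcont00 access=readwrite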
 

See attachment. 100% utilisation looks OK, it shows some GBs free.
That means that the filesystem is full of containers, but the containers themselves are not all full, so there is still room to write new data within the existing containers.
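If you want to check that from the server side, a quick select along these lines should show it (the directory path is just an example, and it assumes the CONTAINERS table view exposes free_space_mb, the column mentioned later in this thread):

Code:
/* per-container free space inside one of the full directories */
select container_name, free_space_mb from containers where container_name like '/spcont/SPcont00/%'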
 

Late to the party, but I have seen the same thing. If you do a 'q container f=d' you can see how much space is free per container. I had to add filesystems to the directory storage pool later on as well, before I got my new array. This is normal: AIX will complain, and TSM will keep on doing what it needs to do. You might see some warnings/errors in the actlog, however.

As to moving things around, this should help a bit:
I just did a 'q container > out.txt' and cleaned up out.txt to only include, say, 50% of /tsmstg01, 50% of /tsmstg02, and whatever other directories you want to 'clean up'.

You might want to set the directories you are moving from to read-only, or there is a chance that containers moved off /tsmstg02 end up on /tsmstg01 (or vice versa).

Just remember to set them back to read-write afterwards. Check out 'help UPDate STGPOOLDIRectory'.

Anyhow, here's the quick and dirty script that I've used in the past:
Code:
#!/bin/ksh
# File containing the list of containers to move, one container name per line
file=/home/<username>/scripts/out.txt

# Global variables
TSMADMIN=<admin id>
TSMSE=<server id>
TSMPA=<password>

# Function: run a command against the server via the admin CLI
tsmcmd()
{
    dsmadmc -se=${TSMSE} -id=${TSMADMIN} -pa=${TSMPA} -tab -dataonly=yes "$*"
}

# Move each container listed in the file, one at a time
while read vol
do
    tsmcmd "move container $vol wait=yes"
done <"$file"

Run it in the background, or in screen/tmux, and come back after a while. Last time I did this I ended up moving about 350-ish container files that way.
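For example, something like this (the script and file names are only illustrative):

Code:
# generate the container list, then edit out.txt down to the containers you actually want to move
dsmadmc -se=<server id> -id=<admin id> -pa=<password> -tab -dataonly=yes "query container" > out.txt

# run the move script detached and check on it later
nohup ./move_containers.ksh > move_containers.log 2>&1 &
tail -f move_containers.log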
 

select count(*) from containers where container_name like '%SPcont00%'   (repeat for SPcont01 through SPcont05)
-> check how many containers there are, so you know how many you want to move
select container_name from containers where container_name like '%SPcont00%'   (again for SPcont01 through SPcont05)

You could even play around with the select, e.g. where free_space_mb > 'xxxxx', to move only those containers which have free space.

Take the list of those containers and modify it like this (see the sketch after this post):

move container "containername" defrag=yes wait=no   (every 10 containers in the list, use a wait=yes)

set the full container directories to read-only
run the script macro ( macro c:\script.xxx )
set the container directories back to read-write

A 5030 can do 3.0 GB/s throughput on a move container, so 10 running in parallel is no problem (2x 16 Gbit FC).

Of course, all of this done outside production/working hours.

br, Dietmar
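A sketch of what that edited macro could look like; the container paths are placeholders, and every tenth move uses wait=yes so roughly 10 moves are in flight at a time:

Code:
/* script.mac - moves run asynchronously (wait=no), every 10th one waits */
move container /spcont/SPcont00/00/0000000000000001.dcf defrag=yes wait=no
move container /spcont/SPcont00/00/0000000000000002.dcf defrag=yes wait=no
/* ... seven more with wait=no ... */
move container /spcont/SPcont00/00/000000000000000a.dcf defrag=yes wait=yes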
 

(quoting Dietmar's post above)
Very nice!
The post above I made back when containers were first introduced in v7; I just haven't had the need to poke further since. I don't think there was a defrag option then. Too long ago for my memory recall device.
I like the way you've posted it.
 

Thx all.

This is what I got back from IBM Support. I'm planning to run it in the coming days, but it looks a lot like your suggestions.

db2 connect to tsmdb1
db2 set schema tsmdb1

db2 -x "select 'MOVE CONTAINER ' || sdc.cntrname || ' W=Y' from sd_containers sdc where exists (select 1 from sd_fragmented_containers sdfcn where sdfcn.cntrid=sdc.cntrid) order by sdc.freespace desc fetch first 20 rows only for read only with ur" > moveTop20.mac

Then execute the macro:

dsmadmc -id=**** -password=**** macro moveTop20.mac
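If it takes more than one pass, a rough ksh wrapper around the same procedure could look like this; run it on the server as the DB2 instance owner. The pass count and the credential placeholders are assumptions on my part, only the select is the one IBM supplied:

Code:
#!/bin/ksh
# repeat IBM's procedure a few passes: rebuild the "top 20" move macro and run it
db2 connect to tsmdb1
db2 set schema tsmdb1

pass=1
while [ $pass -le 5 ]   # number of passes is arbitrary, adjust as needed
do
    db2 -x "select 'MOVE CONTAINER ' || sdc.cntrname || ' W=Y' from sd_containers sdc where exists (select 1 from sd_fragmented_containers sdfcn where sdfcn.cntrid=sdc.cntrid) order by sdc.freespace desc fetch first 20 rows only for read only with ur" > moveTop20.mac

    dsmadmc -id=<admin id> -password=<password> macro moveTop20.mac

    pass=$((pass + 1))
done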
 