stgpooldir 100% utilized

mkraker (ADSM.ORG Member)
Good day,

I have 6 container pool directories at 100% (on my replication server):

Code:
name#/spcont/SPcont03>: df -g | grep spcont
/dev/contlv00 10232.00 0.00 100% 1052 83% /spcont/SPcont00
/dev/contlv01 10232.00 0.00 100% 1058 85% /spcont/SPcont01
/dev/contlv02 10232.00 0.00 100% 1053 87% /spcont/SPcont02
/dev/contlv03 10232.00 0.00 100% 1073 82% /spcont/SPcont03
/dev/contlv04 10232.00 0.00 100% 1060 90% /spcont/SPcont04
/dev/contlv05 10232.00 0.00 100% 1046 89% /spcont/SPcont05

/dev/contlv06 10232.00 3615.75 65% 691 1% /spcont/SPcont06
/dev/contlv07 10232.00 5918.03 43% 446 1% /spcont/SPcont07
/dev/contlv08 10232.00 7118.38 31% 323 1% /spcont/SPcont08
/dev/contlv09 10232.00 8608.23 16% 176 1% /spcont/SPcont09
(AIX df -g columns: Filesystem, GB blocks, Free, %Used, Iused, %Iused, Mounted on)

The 6 directories at 100% were created initially; the other 4 were added later. Is this normal behaviour, and should I be worried? Is there some way I can rebalance the full directories?

I also see that my replication storage pool is bigger than the primary (expiration is running fine on the replication server). The DB2 database is also bigger on the replication server, and it keeps growing further ahead of the primary DB2 database every day.

In AIX I also got the message that the filesystem was full.

Any clarification appreciated.


Kind regards

Michel.
 

marclant (ADSM.ORG Moderator)
For performance, it's better if the data is spread more evenly. This will happen naturally over time through the automatic container moves that run in the background, but you can speed it up with some manual MOVE CONTAINER commands. Set the 6 full directories to read-only and move containers into the other 4 until all 10 are closer to the same occupancy.

In the future, assuming your performance is good, it may be better to extend those 10 filesystems rather than adding more filesystems; that way you don't create this imbalance. What's happening right now is that most new writes are directed to the 4 new filesystems instead of the 6 full ones, so you aren't yet getting the benefit of spreading the workload across all 10 filesystems.
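The read-only-then-move procedure above can be sketched as building an administrative macro. This is only an illustration: the pool name CONTPOOL is an assumption, and the directory names are taken from the df output earlier in the thread; substitute your own before running anything.

```shell
# Sketch: build a macro that sets the six full directories read-only.
# CONTPOOL is a hypothetical pool name; replace with your real pool.
POOL=CONTPOOL

# The six full directories from the df output above
FULL_DIRS="/spcont/SPcont00 /spcont/SPcont01 /spcont/SPcont02 \
/spcont/SPcont03 /spcont/SPcont04 /spcont/SPcont05"

# Emit one UPDATE STGPOOLDIRECTORY command per full directory
: > rebalance.mac
for d in $FULL_DIRS; do
    echo "UPDATE STGPOOLDIRECTORY $POOL $d ACCESS=READONLY" >> rebalance.mac
done

# MOVE CONTAINER lines for the containers on those directories would be
# appended here; afterwards set the directories back to ACCESS=READWRITE.
cat rebalance.mac
```

The resulting file would then be run with dsmadmc (for example `dsmadmc -id=... -password=... macro rebalance.mac`), as shown later in the thread.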

As for the target being larger than the source: for expiration on the target to work, replication must complete successfully for every node, so make sure it does for both servers. If replication is completing successfully every day and the gap keeps growing, you might need to open a case with IBM to look into it.
 

marclant (ADSM.ORG Moderator)
See attachment. 100% utilisation looks OK; it shows some GBs free.
That means the filesystem is full of containers, but the containers themselves are not all full; there is still room to write new data within the existing containers.
 

RecoveryOne (ADSM.ORG Senior Member)
Late to the party, but I have seen the same thing. If you do a Q CONTAINER F=D you can see how much space is free per container. I had to add filesystems to the directory storage pool later on as well, before I got my new array. This is normal: AIX will complain, and TSM will keep doing what it needs to do. You might see some warnings/errors in the actlog, however.

As to moving things around, this should help a bit. I did a 'q container > out.txt' and cleaned up out.txt to include only, say, 50% of the containers on /tsmstg01 and 50% of those on /tsmstg02, plus any other directories you want to 'clean up'.

You may want to set the directories you are moving from to read-only; otherwise there is a chance that containers from /tsmstg02 end up on /tsmstg01.

Just remember to set them back to read-write afterwards. Check out 'help UPDate STGPOOLDIRectory'.

Anyhow, here's the quick and dirty script that I've used in the past:
Code:
#!/bin/ksh
# Input file: one container name per line (cleaned-up 'q container' output)
file=/home/<username>/scripts/out.txt

# Global variables
TSMADMIN=<admin id>
TSMSE=<server id>
TSMPA=<password>

# Run a single administrative command through dsmadmc
tsmcmd()
{
    dsmadmc -se=${TSMSE} -id=${TSMADMIN} -pa=${TSMPA} -tab -dataonly=yes "$*"
}

# Move each container in the list, one at a time
while read vol
do
    tsmcmd "move container $vol wait=yes"
done <"$file"
Run it in the background, or in screen/tmux, and come back after a while. The last time I did this, I ended up moving about 350 containers that way.
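For the "run it in the background" part, one common pattern is nohup with output redirected to a log. The script name here is hypothetical; a stand-in script is created so the pattern is runnable as shown, but in real use you would point nohup at the actual move script above.

```shell
# Stand-in for the move script, just so this sketch runs end to end
cat > movecontainers.ksh <<'EOF'
#!/bin/sh
echo "all moves submitted"
EOF
chmod +x movecontainers.ksh

# Detach it from the terminal, log its output, and wait for it here
nohup ./movecontainers.ksh > movecontainers.log 2>&1 &
wait $!
cat movecontainers.log
```

With the real script you would skip the `wait` and simply log out, checking movecontainers.log later.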
 

dietmar (ADSM.ORG Member)
select count(*) from containers where container_name like '%SPcont00%'   (repeat for 01, 02, 03, 04, 05)
-> check how many containers there are, so you know how many you want to move
select container_name from containers where container_name like '%SPcont00%'   (repeat for 01, 02, 03, 04, 05)

You could even play around with "where free_space_mb > 'xxxxx'" to move only those containers which have free space.

Take the list of those containers and modify each line like this:

move container "containername" defrag=yes wait=no   (every 10th container in the list gets wait=yes)

Set the full container directories to read-only,
run the script as a macro ( macro c:\script.xxx ),
then set them back to read-write.

A 5030 can do 3.0 GB/s throughput on a move container, so 10 running in parallel is no problem ( 2x 16 Gbit FC ).

Of course, do all of this outside production/working hours.

br, Dietmar
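The list editing Dietmar describes (defrag=yes wait=no on every line, wait=yes on every 10th) is easy to automate with awk. A sketch, with made-up container names; in practice containers.txt would come from the SELECT above:

```shell
# Build a sample container list; these names are hypothetical
: > containers.txt
i=1
while [ "$i" -le 12 ]; do
    printf '/spcont/SPcont00/%08x.dcf\n' "$i" >> containers.txt
    i=$((i + 1))
done

# Every 10th MOVE CONTAINER gets wait=yes, so the macro pauses
# periodically instead of queuing all moves at once
awk '{
    w = (NR % 10 == 0) ? "wait=yes" : "wait=no"
    printf "move container %s defrag=yes %s\n", $1, w
}' containers.txt > move.mac

cat move.mac
```

The resulting move.mac would then be run with `macro move.mac` from a dsmadmc session, with the full directories set read-only first as described.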
 

RecoveryOne (ADSM.ORG Senior Member)
dietmar said: (quoted post above)
Very nice!
The post I made above dates back to when containers were first introduced in v7; I just haven't had the need to poke further since. I don't think there was a defrag option then. Too long ago for my memory recall device.
I like the way you've laid it out.
 

mkraker (ADSM.ORG Member)
Thanks, all.

This is what I got back from IBM Support. I'm planning it for the coming days, and it looks a lot like your suggestions:

Code:
db2 connect to tsmdb1
db2 set schema tsmdb1

db2 -x "select 'MOVE CONTAINER ' || sdc.cntrname || ' W=Y' from sd_containers sdc where exists (select 1 from sd_fragmented_containers sdfcn where sdfcn.cntrid=sdc.cntrid) order by sdc.freespace desc fetch first 20 rows only for read only with ur" > moveTop20.mac

Then execute the macro:

Code:
dsmadmc -id=**** -password=**** macro moveTop20.mac
 

