ADSM-L

Subject: Re: BUFPoolSize - what am I missing out on?
From: "MC Matt Cooper (2838)" <Matt.Cooper AT AMGREETINGS DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 6 Aug 2002 10:18:43 -0400
Denis,
        I am running TSM 4.1.5 on z/OS 1.1.  I set selftunebufpoolsize on.
The server may grab more real memory when it is available, but it hasn't
impacted anything else.  I put TSM in a 'STCMED' service class, which puts
it ahead of the test work but not in front of the most important batch.  I
notice the amount of real storage in use by TSM goes down significantly as
the morning workload starts to ramp up.
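In dsmserv.opt the two lines involved look roughly like this (the
BUFPOOLSIZE value shown is just your current 214000 KB, not a
recommendation):

   * database buffer pool size, in kilobytes (214000 KB is about 210 MB)
   BUFPOOLSIZE 214000
   * let the server grow the pool on its own when the cache hit pct runs low
   SELFTUNEBUFPOOLSIZE YES
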
Matt

-----Original Message-----
From: L'Huillier, Denis [mailto:DLHuilli AT EXCHANGE.ML DOT COM]
Sent: Tuesday, August 06, 2002 10:07 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: BUFPoolSize - what am I missing out on?

Hello,
ENV: OS/390 TSM 4.1

I have a question about Bufpoolsize.  From what I read, the cache hit
percentage should be at least 98%.
What is the impact if it's below that?  I'm currently at about 94% with a
214MB Bufpoolsize.
All my backups complete within the backup window (over 1200 nodes).  I'm all
for getting it to 98%, but I need some justification as to how a cache hit
percentage below 98% is impacting performance.
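
(For what it's worth, I'm reading that percentage from the server's own
statistics; unless I have the 4.1 command names wrong, the sequence is
roughly:

   reset bufpool                 (zero the buffer pool statistics)
   ... let a representative workload run ...
   query db format=detailed      (then look at Cache Hit Pct. / Cache Wait Pct.)

Resetting first makes the percentage reflect only the interval you care
about.)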

Also, while I'm on the topic...
What about selftunebufpoolsize?  I'm running on OS/390 and don't want to
take all the available memory by setting it to 'YES'.  Does anybody know
what increment selftunebufpoolsize uses each time it makes an adjustment?
Is selftunebufpoolsize recommended for an OS/390 environment?

Is there any kind of formula I can apply to the size of my database to
calculate what my Bufpoolsize should be (ballpark)?
I've already increased it from 128000 to 214000 and made a whopping jump
from 93% to 94%.
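
(Back-of-envelope only, in case it helps anyone sanity-check my numbers:

   128000 KB is roughly 125 MB of buffer pool
   214000 KB is roughly 209 MB of buffer pool
   30 GB of used database pages is roughly 31,457,280 KB
   214000 / 31,457,280 is roughly 0.7%

so even after the increase the pool holds well under 1% of the database
pages, which may be part of why the hit ratio barely moved.)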

Anybody want to share their bufpoolsize for their database of approx. 30GB
(used pages)?

Anybody have some experience where increasing the Bufpoolsize helped
performance or resolved another issue?

Thanks!
-Denis
