Subject: Re: Sizing of Bufpoolsize, everyone take note of this response
From: "Seay, Paul" <seay_pd AT NAPTHEON DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 17 Jan 2003 10:36:10 -0500
As Zorg would say, I know the sound of this music.

The default maxperm is probably killing you.  I am guessing you are swapping
more than you are running and your swap drives are I/O hot; an iostat (or
topas) will tell you.  maxperm dictates the amount of real memory that can be
consumed by non-computational (file) pages, and the default is 80 percent.
There was a long discussion about this on the list about 3 months ago.  The
way you change maxperm is with vmtune.  My system is a 2 GB system and I have
maxperm set to 40 percent.  When vmtune lists the value, it gives it as a
page count at the top of the output and as a percentage at the bottom.  This
is how our vmtune is set up.  The settings do not survive a reboot, so after
we were comfortable with them, we put them in inittab.
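
One way to wire that up (the paging check plus the boot-time re-apply); the
inittab identifier "vmtune" and runlevel 2 here are just examples, not
necessarily what we use:

# Check whether the paging-space disks are I/O hot:
lsps -a          # paging spaces and their utilization
iostat 5 3       # watch the paging-space hdisks for heavy I/O

# Re-apply the vmtune settings at every boot via an inittab entry:
mkitab "vmtune:2:once:/usr/samples/kernel/vmtune -p10 -P40 -F376 -R256 >/dev/null 2>&1"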

Our bufpoolsize is 327680 (that is in KB, i.e. 320 MB).

/usr/samples/kernel/vmtune -p10 -P40
/usr/samples/kernel/vmtune -F376
/usr/samples/kernel/vmtune -R256
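
For reference: -p sets minperm%, -P sets maxperm%, -F sets maxfree, and -R
sets maxpgahead.  Running vmtune with no arguments just prints the current
values, so you can check the result at any time:

/usr/samples/kernel/vmtune     # no arguments: display current settings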

These changes resulted in the following:
vmtune:  current values:
     -p       -P          -r          -R        -f       -F         -N          -W
minperm  maxperm  minpgahead  maxpgahead   minfree  maxfree  pd_npages  maxrandwrt
  52424   209699           2         256       120      376     524288           0

    -M       -w       -k        -c         -b           -B          -u         -l     -d
maxpin  npswarn  npskill  numclust  numfsbufs  hd_pbuf_cnt  lvm_bufcnt  lrubucket  defps
419400    24576     6144         1         93          464           9     131072      1

                -s         -n        -S            -L         -g              -h
sync_release_ilock  nokilluid  v_pinshm  lgpg_regions  lgpg_size  strict_maxperm
                 0          0         0             0          0               0

number of valid memory pages = 524249   maxperm=40.0% of real memory
maximum pinable=80.0% of real memory    minperm=10.0% of real memory
number of file memory pages = 396986    numperm=75.7% of real memory

The other two changes, maxfree (-F) and maxpgahead (-R), MUST be done in that
order.  What they do is significantly improve storage pool migration and
database backup speed if your disks are fast.  We have ESS, and it really
makes a difference.
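
My understanding of why the order matters (the usual AIX rule of thumb, so
treat it as a guideline rather than gospel): maxfree should exceed minfree by
at least maxpgahead, so maxfree gets raised before maxpgahead:

# Rule of thumb: maxfree >= minfree + maxpgahead
#   120 (minfree) + 256 (maxpgahead) = 376 (maxfree)
/usr/samples/kernel/vmtune -F376    # raise maxfree first
/usr/samples/kernel/vmtune -R256    # then raise maxpgahead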

I subscribe to the general consensus: set maxperm and minperm really low on a
machine that acts only as a TSM server.

Let me put it in simple terms: you should see up to a three-order-of-magnitude
improvement.  The system will probably run better than it ever has.  If I
remember correctly, our system performance broke when we took the
bufpoolsize over about 96000.


Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-----Original Message-----
From: PAC Brion Arnaud [mailto:Arnaud.Brion AT PANALPINA DOT COM]
Sent: Friday, January 17, 2003 10:04 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Sizing of Bufpoolsize


Hi *SM fellows,

I'm running TSM 4.2.3.1 on an AIX 4.3.3 system (IBM 6h0) with 2 GB RAM; the
db size is 21 GB (75% used) and the log size is 12 GB. For the last three
weeks (since we upgraded from 4.2.1.15) I have seen massive performance
degradation: expire inventory takes ages (approximately 20 hours) to go
through 9 million objects, and the cache hit ratio is between 94 and 96%. I
tried to increase my bufpoolsize from 151 MB to 400 MB in several 100 MB
steps without seeing any improvement, and the system is now heavily paging.
Could you please share your bufpoolsize settings with me if you are working
in the same kind of environment, or give me some advice on tuning the server?
Thanks in advance!
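
In case it helps, these are roughly the commands I have been using to watch
and change the buffer pool (the admin id/password are placeholders, and the
value shown is simply the last one I tried):

dsmadmc -id=admin -password=xxxxx "query db format=detailed"   # "Cache Hit Pct." is the buffer pool hit ratio
dsmadmc -id=admin -password=xxxxx "setopt bufpoolsize 409600"  # value is in KB, i.e. 400 MB
dsmadmc -id=admin -password=xxxxx "reset bufpool"              # clear the hit-ratio statistics before re-measuring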

Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group     |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01       |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
