ADSM-L

Subject: Re: AIX memory tuning, needing some help !
From: "Mark D. Rodriguez" <mark AT MDRCONSULT DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 20 Jul 2005 10:57:52 -0500
Hi,

First of all, there is nothing in the short vmstat output that concerns
me.  However, nobody can give any kind of performance tuning
recommendation based on the little information you are providing.
Performance tuning requires several different measurements to be taken
over an extended period of time before you can even begin to understand
how the system is performing, let alone decide how to make it perform
better!  Therefore, the old saying "if it's not broken, don't fix it"
applies.  If you are not perceiving any performance issues, then you can
leave it alone.
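Since the advice above hinges on gathering measurements over time, here
is a minimal sketch of how one might collect a vmstat baseline.  The log
path, intervals, and sample counts are arbitrary choices of mine; on AIX
you would typically capture iostat and svmon output the same way, driven
from cron or nohup over several days.

```shell
# Sketch: collect a timestamped vmstat baseline (2 short rounds here
# just to illustrate; in practice run many samples over days).
LOG="/tmp/perfbase.$(date +%Y%m%d).log"
for round in 1 2; do
    date >> "$LOG"                          # timestamp each sampling round
    vmstat 1 2 >> "$LOG" 2>/dev/null || true   # memory/paging/CPU samples
done
```

Only once you have that history can you tell a real trend from a
one-off spike.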

Now, in regard to the vmo and ioo commands you are proposing, I don't
think you need to change those parameters at this time.  However, if
performance does become an issue, there have been several threads on
this list discussing tuning AIX parameters for a TSM environment.  A
couple of comments on your proposed changes: setting maxperm to a lower
number is commonly done on TSM systems, but setting strict_maxperm can
be problematic, so I would avoid it.  In fact, I think you should look
at the rbrw mount option instead.  It is a much better way to control
memory usage for file I/O, and you can set it as a mount option on all
file systems that contain TSM DB, log, and storage pool volumes.

Increasing maxfree is helpful, and it is necessary if you increase
maxpgahead.  Deciding what values to use for maxpgahead, minpgahead,
and maxfree can be a complicated process.  First of all, the minpgahead
value you are setting appears arbitrarily large, which will not
increase performance and may in fact decrease it, as well as have a
negative impact on memory utilization.  To come up with an appropriate
size you must understand how the data is placed on the disks.  I don't
want to go too deep here, but this can be further complicated by issues
with SAN arrays and disk virtualization.  The simple answer is that
minpgahead should be set so that on the first read-ahead, every disk
the data is spread across (assuming either RAID 0 or RAID 5) receives
an I/O request.  Then maxpgahead should be set so that a single
maxpgahead read fills all the disk queues.  And finally, maxfree should
be set so that the difference between maxfree and minfree equals
maxpgahead.  Here is a very simple example: my data is on a RAID 0
array spread across 8 disks, and each disk has a queue_depth (another
tunable parameter, at the device level) of 8.  I would set my
minpgahead to 8, my maxpgahead to 64 (8 drives times a queue_depth of
8), and my maxfree to 184 (based on the default minfree of 120).
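That worked example can be sketched as a small calculation.  The disk
count, queue_depth, and minfree values below are the ones assumed in
the example; the commented ioo/vmo/mount lines (and the /tsm/stgpool
mount point) are illustrative only, so verify the option names on your
own AIX level before using them.

```shell
# Worked example from above: RAID 0 across 8 disks, each with a
# queue_depth of 8 (check yours with: lsattr -El hdisk0 -a queue_depth).
NDISKS=8
QUEUE_DEPTH=8
MINFREE=120                              # AIX default minfree

MINPGAHEAD=$NDISKS                       # one I/O per data disk on first read-ahead
MAXPGAHEAD=$((NDISKS * QUEUE_DEPTH))     # enough to fill every disk queue
MAXFREE=$((MINFREE + MAXPGAHEAD))        # keep maxfree - minfree == maxpgahead

echo "minpgahead=$MINPGAHEAD maxpgahead=$MAXPGAHEAD maxfree=$MAXFREE"

# The corresponding AIX commands would then look like (illustrative):
#   ioo -o minpgahead=8 -o maxpgahead=64
#   vmo -o maxfree=184
#   mount -o rbrw /tsm/stgpool           # release-behind read/write, as above
```

With these numbers the echo prints minpgahead=8 maxpgahead=64
maxfree=184, matching the example.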

I hope that helps.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===============================================================================
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===============================================================================



PAC Brion Arnaud wrote:

Hi gang,

I just moved my TSM server (5.3.1) to a brand new P550 (AIX 5.3) with
8 GB RAM: performance seems OK, but the nmon and vmstat outputs are
making me feel "uncomfortable", in that they report very, very high
page faults ...
This is what it looks like:

vmstat 5

System configuration: lcpu=8 mem=7840MB

kthr    memory              page              faults        cpu
----- ----------- ------------------------ ------------ -----------
r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
2  0 860485 892282   0   0   0   0    0   0 429 202048 7586 56 24 20  0
4  0 860306 892372   0   0   0   0    0   0 479 200803 7870 55 24 20  0
4  0 860341 892520   0   0   0 104  565   0 541 203473 10387 56 24 19  1
5  0 860204 892637   0   0   0  51  271   0 417 203636 8503 56 24 20  0
4  0 860571 892118   0   0   0   0    0   0 404 202747 7691 56 24 20  0
4  0 860362 892475   0   0   0  59  356   0 445 202569 8441 56 24 20  0
2  0 860255 892429   0   0   0   0    0   0 393 201914 8035 56 24 20  0
6  0 860528 892132   0   0   0   0    0   0 381 202421 7390 56 23 20  1
5  0 860395 892417   0   0   0  60  375   0 382 202787 7686 56 23 20  0
4  0 860217 892443   0   0   0   0    0   0 347 202069 7069 56 23 21  0
3  0 860256 892512   0   0   0  51  280   0 347 203474 6887 56 23 20  0
2  0 860437 892438   0   0   0  51  305   0 386 202040 7862 56 23 20  0
3  0 860245 892478   0   0   0   0    0   0 371 202359 7320 56 24 20  0
5  0 860520 892179   0   0   0   0    0   0 338 200080 6515 56 23 21  0
4  0 860228 892447   0   0   0   0    0   0 345 203863 6916 56 24 20  0

As you can see, the paging activity is mostly happening at the "sy"
level and does not seem to affect performance: CPU wait = 0.

I have pretty limited knowledge of memory tuning, and was wondering
what this means, whether it affects my server's performance, and
whether it can be avoided with improved vmo and/or ioo settings.

Actually I have this (largely inspired by my readings about memory
tuning on this list):

vmo -o maxperm%=40 -o strict_maxperm=1 -o maxfree=376 -o maxclient%=20
ioo -o minpgahead=128 -o maxpgahead=256 -o numclust=1

Does this make sense to your eyes, or is there something I should
modify?
Thanks in advance for any explanations and/or advice!

Arnaud

******************************************************************************
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: arnaud.brion AT panalpina DOT com
******************************************************************************



