ADSM-L

Subject: Re: Cache hit percentage
From: Guillaume Gilbert <guillaume.gilbert AT DESJARDINS DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Thu, 10 Apr 2003 14:31:42 -0400
No, I don't, because I don't think it would help me. It would just increase
the bufpoolsize after every expiration, which I already do.
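
For reference, the two options involved look roughly like this in dsmserv.opt
(the values are only an illustration of the syntax, not what this server runs):

    * buffer pool size, in KB (921600 KB = 900 MB)
    BUFPOOLSIZE 921600
    * let the server grow BUFPOOLSIZE on its own around expiration
    * when the cache hit ratio is low
    SELFTUNEBUFPOOLSIZE YES

The buffer pool can also be resized on the fly from an admin session with
something like "setopt bufpoolsize 921600".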

Guillaume Gilbert
Storage Administrator
CGI Montreal




Dave Canan <ddcanan AT ATTGLOBAL DOT NET> @ VM.MARIST.EDU on 2003-04-10 14:29:15

Please reply to "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>

Sent by:      "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>


To:      ADSM-L AT VM.MARIST DOT EDU
cc:
Subject:     Re: Cache hit percentage

Are you also using selftunebufpoolsize?


At 02:12 PM 4/10/2003 -0400, you wrote:
>Hi all
>
>I've been living with a cache hit percentage of 97-98% for over 2 years
>now. I've always wanted to get to 99% but have never been able to. I
>finally got my RAM doubled from 1.5 to 3 GB two weeks ago. Since then I've
>been steadily upping the bufpoolsize with no results whatsoever. Here are
>the details of the system:
>
>F80 with 4 processors and 3 GB of RAM
>AIX 5.1
>TSM 4.2.3.3
>The DB sits on 15 GB 10K rpm Hitachi disks in a 7700E (RAID 5). The database
>is alone on the RAID group.
>2 FC cards for disks, connected through an Inrange FC9000.
>
>As you can see, the database is not mirrored:
>
>Volume Name (Copy 1)            Copy      Available   Allocated        Free
>                                Status    Space (MB)  Space (MB)  Space (MB)
>------------------------------  -------  -----------  ----------  ----------
>/usr/local/tsm/bd01/db.bd01     Sync'd         6,800       6,800           0
>/usr/local/tsm/bd02/db.bd02     Sync'd         6,800       6,800           0
>/usr/local/tsm/bd03/db.bd03     Sync'd         6,800       6,800           0
>/usr/local/tsm/bd09/db.bd09     Sync'd         6,800       6,800           0
>/usr/local/tsm/bd09/db02.bd09   Sync'd         7,168       1,168       6,000
>
>(Copy 2 and Copy 3 are undefined for every volume.)
>
>My bufpoolsize is now at 921600 (900 MB), and here's the output of q db f=d:
>
>           Available Space (MB): 34,368
>         Assigned Capacity (MB): 28,368
>         Maximum Extension (MB): 6,000
>         Maximum Reduction (MB): 3,512
>              Page Size (bytes): 4,096
>             Total Usable Pages: 7,262,208
>                     Used Pages: 5,568,905
>                       Pct Util: 76.7
>                  Max. Pct Util: 78.0
>               Physical Volumes: 5
>              Buffer Pool Pages: 230,400
>          Total Buffer Requests: 298,089,579
>                 Cache Hit Pct.: 97.84
>                Cache Wait Pct.: 0.00
>            Backup in Progress?: No
>     Type of Backup In Progress:
>   Incrementals Since Last Full: 0
> Changed Since Last Backup (MB): 32.38
>             Percentage Changed: 0.15
> Last Complete Backup Date/Time: 04/10/2003 09:00:56
>
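(As a rough sanity check on those numbers: 921,600 KB divided by the 4 KB page
size gives exactly the 230,400 buffer pool pages shown above, and a 97.84% hit
rate on 298,089,579 buffer requests still means roughly 6.4 million pages had
to come from disk.)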
>This is at 2:00 PM today. I don't really have a performance problem (at
>least I hope not...). Expiration runs between 1 and 2 hours, examining over
>4 million files and deleting between 100,000 and 500,000.
>
>Here's what vmtune says:
>
>vmtune:  current values:
>-p       -P       -r          -R          -f       -F       -N         -W
>minperm  maxperm  minpgahead  maxpgahead  minfree  maxfree  pd_npages  maxrandwrt
>74189    222569   2           8           120      128      65536      128
>
>-M      -w       -k       -c        -b         -B           -u          -l         -d
>maxpin  npswarn  npskill  numclust  numfsbufs  hd_pbuf_cnt  lvm_bufcnt  lrubucket  defps
>629137  16384    4096     1         186        640          9           131072     1
>
>-s                  -n         -S        -L            -g         -h
>sync_release_ilock  nokilluid  v_pinshm  lgpg_regions  lgpg_size  strict_maxperm
>0                   1          0         0             0          0
>
>-t           -j              -J               -z
>maxclient  j2_nPagesPer j2_maxRandomWrite  j2_nRandomCluster
>222569           32            0                  0
>
>-Z                  -q                    -Q                -y
>j2_nBufferPer  j2_minPageReadAhead  j2_maxPageReadAhead   memory_affinity
>512              2                    8                 0
>
>-V                  -i
>num_spec_dataseg  spec_dataseg_int
>0                512
>
>PTA balance threshold percentage = 50.0%
>
>number of valid memory pages = 786421     maxperm=30.0% of real memory
>maximum pinable=80.0% of real memory        minperm=10.0% of real memory
>number of file memory pages = 343322      numperm=46.3% of real memory
>number of compressed memory pages = 0     compressed=0.0% of real memory
>number of client memory pages = 0 numclient=0.0% of real memory
># of remote pgs sched-pageout = 0    maxclient=30.0% of real memory
>
>I also have a smaller TSM server running on the same machine. Its database
>is just under 1 GB, the bufpoolsize is at 256000, and I can't get the cache
>hit percentage over 97%.
>
>So, are the vmtune settings really that bad? I am not an AIX sysadmin and
>haven't quite gotten my head around this whole vmtune thing.
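For what it's worth, minperm and maxperm are changed with the vmtune command
itself; on AIX 5.1 it ships in the bos.adt.samples fileset, and a call looks
roughly like this (the percentages are only an illustration, not a
recommendation, and the change does not survive a reboot):

    # set minperm% to 10 and maxperm% to 30
    /usr/samples/kernel/vmtune -p 10 -P 30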
>
>Any help or recommendation will be appreciated.
>
>Thanks in advance
>
>Guillaume Gilbert
>Storage Administrator
>CGI Montreal

Dave Canan
TSM Performance
IBM Advanced Technical Support
ddcanan AT us.ibm DOT com




