Subject: Re: Commits to fragmented DB
From: Richard Sims <rbs AT BU DOT EDU>
Date: Tue, 9 Jan 2001 11:53:51 -0500
>Recently we have been encountering problems with our recovery log filling up
>quickly.  Our DB is 38 GB and the recovery log is at the maximum size of 5
>GB.  We've been told that our problem with the recovery log is due to our
>database being so fragmented, and therefore it takes a long time to complete
>the commits.  Support was then surprised to hear that this is the same
>database that we've been using since ADSM Version 2.
>
>Is this so unusual???

Ann - There is the controversial dsmserv UNLOADDB, which is too superficially
      documented in the Admin Guide topic "Optimizing the Performance of the
Database and Recovery Log".  I say superficially documented because the
verbiage says nothing about how long you can expect your database to be out of
commission, the risks involved, the actual benefits, or how long you can
expect those benefits to last.
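
For orientation only - a rough sketch, not a substitute for the Admin
Reference for your server level, and the device class and volume names here
(TAPECLASS, DBDUMP01) are placeholders - the unload/reload sequence runs with
the server halted, after taking a full 'backup db', along these lines:

   dsmserv unloaddb devclass=TAPECLASS volumenames=DBDUMP01
                                   (dump the database to sequential media,
                                    written in reorganized key order)
   dsmserv loadformat ...          (format fresh database and recovery log
                                    volumes; volume arguments per the Admin
                                    Reference)
   dsmserv loaddb devclass=TAPECLASS volumenames=DBDUMP01
                                   (reload the reorganized database)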

But given that you have been using this same database for years (as many of us
have), some modernization may be in order.  In addition to basic server
performance configuration...

Database performance                    - Locate the database on disks which are
                                          separate from other operating system
                                          services, and choose fast disks and
                                          connection methods (like Ultra SCSI).
                                        - Spread over multiple physical volumes
                                          rather than consolidating on a single
                                          large volume: TSM gives a process
                                          thread to each volume, so performance
                                          can improve through parallelism.
                                        - Do 'Query DB F=D' and look at the
                                           Cache Hit Pct. The value should be up
                                           around 98%. If less, consider boosting
                                           the server BUFPoolsize option (see the
                                           command sketch after this list).
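
As a concrete illustration of the last two points - the volume path, sizes,
and buffer pool value below are hypothetical, so adapt them to your own
environment - the commands would look something like:

   dsmfmt -m -db /tsm/db/db02.dsm 2048   (opsys command line: format a new
                                          2 GB database volume)
   define dbvolume /tsm/db/db02.dsm      (introduce the volume to the server)
   extend db 2048                        (make the new space usable)

   query db f=d                          (inspect the Cache Hit Pct. value)
   setopt bufpoolsize 131072             (raise the buffer pool, in KB; or set
                                          BUFPoolsize in dsmserv.opt and
                                          restart the server)
   reset bufpool                         (restart cache statistics so the new
                                          hit ratio can be judged fairly)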

Even if you defragment the database, you can expect it to return to its prior
condition over time, perhaps a relatively short time.  I think the better
approach is to optimize the performance of your configuration via hardware
methods, not the least of which is assuring plenty of real memory for the
server.  After some years, various hardware upgrades are necessary to assure
that servers meet growing demands.  I would work with your opsys people to
review the configuration of the server system, and gather some observational
statistics to determine what optimization can be performed.
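
On a Unix server host, for example, the standard tools can supply such
observational statistics during a busy backup window (the commands and
sampling intervals here are merely illustrative):

   vmstat 5     (persistent paging activity suggests a real-memory shortage)
   iostat 5     (disks running near 100% busy, particularly those holding
                 the database, point to an I/O bottleneck)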

  Richard Sims, BU