Jerry,
Thanks for this work, and in particular sharing it.
Have you come up with any recommendations, such as purging the cache once a
month or something like that? I presume you are not recommending that we
stop caching altogether.
Regards,
Simon
----------
| From: jlawson / mime, , , jlawson AT THEHARTFORD DOT COM
| To: ADSM-L / mime, , , ADSM-L AT VM.MARIST DOT EDU
| Subject: ADSM Storage Pool Performance
| Date: Thursday, 16 December, 1999 1:06AM
|
| A couple of weeks ago, I posted a question to the list about our storage pool
| performance (the original note is attached to the bottom of this note.) I
| want to thank the people who took the time to respond and provide suggestions
| and comparisons to other devices.
|
| We did some analysis, and came up with the realization that, in the words of
| that comic strip icon Pogo, "We have met the enemy, and he is us!" (I
| apologize to those of you not fluent in American comics.) What we
| hypothesized was that since we run our storage pools with caching enabled,
| and since we have been doing this for a period of at least 4 years, there was
| a good chance that the pool itself was extremely fragmented. We speculated
| that even a small request for space was being spread across more than one
| volume. The effect was that every data request was being scattered across
| the 25 volumes in almost a random order.
|
| The good news was that we were able to attempt to validate our idea very
| easily. We turned off caching on the pool, and then let it drain out over
| the weekend. On Monday we came in, and saw that there was now only about 4%
| residual data that was still being cached (we only go down to 5% as a low
| threshold). Stats for the devices have shown amazing improvement - device
| disconnect time has dropped from 79ms to 2ms. At the same time, the IO rate
| has increased accordingly. And more obviously, tasks such as migrations now
| run in less than half the time - just this morning we did a migration of
| approximately 32GB in less than an hour (running 4 concurrent processes.)
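| For anyone who wants to repeat the experiment, the change is a single
| setting on the pool, made from the ADSM administrative client. This is a
| sketch only - the pool name, admin ID, and thresholds below are examples,
| not our actual values:
|
| ```shell
| # Disable caching on a disk storage pool via the ADSM admin CLI (dsmadmc).
| # BACKUPPOOL, the admin credentials, and the 5% low-migration threshold
| # are illustrative examples, not values from this site.
| dsmadmc -id=admin -password=secret <<'EOF'
| update stgpool BACKUPPOOL cache=no lowmig=5
| query stgpool BACKUPPOOL format=detailed
| EOF
| ```
|
| With CACHE=NO, new files are no longer cached after migration, and the
| existing cached copies are discarded as space is reclaimed, so the pool
| drains on its own over time - which is what we observed over the weekend.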
|
| Thanks again to all of you who responded.
|
| Jerry Lawson
| jlawson AT thehartford DOT com
|
|
| ______________________________ Forward Header __________________________________
| Subject: ADSM Storage Pool Performance
| Author: Jerry Lawson at TEKPO
| Date: 11/29/99 9:17 AM
|
|
| Date: November 29, 1999 Time: 9:02 AM
| From: Jerry Lawson
| The Hartford Insurance Group
| (860) 547-2960 jlawson AT thehartford DOT com
| -----------------------------------------------------------------------------
| The guy who handles MVS Performance came to me with some concern over the
| DASD response that ADSM was getting - specifically with our storage pool
| volumes. We did some analysis, and of course determined that the response
| problems were mapping to the times when migrations or storage pool backups
| were running. I asked him to write up what he saw as a question that we
| could post to the list. Here is his observation:
|
| Our ADSM server storage pool is spread over 25 separate DASD volumes across 5
| different SVAs (Shared Virtual Array). During our backup and migration
| processes, we appear to get very poor response time for these DASD volumes.
| At sustained I/O rates of 1 to 1.5 I/Os per second spread across all of the
| pool volumes, our DASD response times for these volumes consistently ride at
| or above 150 milliseconds. Two thirds of this appears to be DISCONNECT, with
| the other portion attributable to CONNECT for the most part. Device
| utilization for these volumes appears to ride around 15%. These DASD volumes
| compare very poorly - performance-wise - to the rest of our DASD farm.
|
| Do you have any thoughts or suggestions as to how we may address this, or do
| you believe that this is simply a by-product of the product's normal
| workings? Any suggestions would be appreciated.
| --------
|
| I should probably add that the SVA is the latest and greatest STK DASD - the
| next upgrade after the RVA version that IBM marketed. We use 3380K images as
| the device types, for a total of 34GB of DASD. I should note that during
| this same time, the DB, which is also mapped over the same set of devices,
| does not have a response problem.
|
|
| -----------------------------------------------------------------------------
| Jerry
|
| ---------------------------- Forwarded with Changes ---------------------------
| From: Jerry Lawson at TEKPO
| Date: 11/29/99 9:17AM
| To: ADSM-L AT VM.MARIST DOT EDU at SMTP
| Subject: ADSM Storage Pool Performance
|
| -------------------------------------------------------------------------------