Subject: Re: A little database performance foo
From: "Allen S. Rout" <asr AT UFL DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Sat, 21 Oct 2006 21:05:09 -0400
>> On Sat, 21 Oct 2006 14:00:06 -0700, Jason Lee
>> <english AT ANIM.DREAMWORKS DOT COM> said:


> I've been trying my damnedest to get some actual performance out of my
> database here and have come to the conclusion that IBM was not lying
> to me and that the database is, in fact, not all it might be.

Oh, they're telling you the truth. :) The DB architecture is one of the
fundamental gripes about TSM among initiates.  "We're considering moving
to a DB2 architecture" has been dangled before us since at least '98.
I'm looking forward to the prospect, but I can't say I'm sorry they're
moving carefully.

[...]

> I have N clients all starting at the same time. They all are going
> to request ~100MB of data (or at least that is the number being
> reported by q session as bytes sent when they have been running for
> a while).  The clients are running on very lightly loaded boxes and
> can consume anything the server can throw at them (or anything it's
> likely to throw, anyway) from the database.

Well, the party-line answer to that problem is that you don't want to
start them at anything like 'the same time'.  This is somewhat
orthogonal to your actual question, but it may help your operational
problem.  If you set the DURation of your schedules somewhat higher,
then (since actual start times are scattered over the first half of
that duration) your contention will go down.
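
For instance, from an admin command line (the schedule and domain names
here are placeholders; SET RANDOMIZE is the knob that governs how much
of the startup window start times get spread over):

   update schedule STANDARD DAILY_INCR duration=4 durunits=hours
   set randomize 50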

Also, or alternatively, you may want to run some (most?) of your
clients INCRBYDATE on most days of the week.  This reduces the impact
of the initial inventory download at the cost of some DB inflation,
and at the risk of missing, for a while, files which are new but carry
old dates.  (Unpacked archives are the easiest example of those.)
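
A sketch of such a client schedule, names mine and not gospel; you'd
still want a regular full incremental at least weekly to catch those
stragglers:

   define schedule STANDARD WEEKDAY_FAST action=incremental options="-incrbydate" starttime=21:00 duration=4 durunits=hours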


If this is the same server that has a 500G DB, then that may explain
why you're seeing this performance problem.  That's broadly considered
"Freakin' HUGE", and most folks find some other portion of their DB
performance to become unacceptable long before they get that big.  How
long does it take you to do a DB backup, anyway?
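
If you want numbers, something like this against the server should
cough up recent DB backup durations (the SUMMARY-table activity name is
from memory; 'query volhistory type=dbbackup' will at least give you
the timestamps):

   select activity, start_time, end_time from summary where activity='FULL_DBBACKUP'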

For ~350GB of DB total, I've got 11 servers, 9 of them customer-facing,
on one box.  Now, I'm using relatively sucky old hardware (18G 10K SSA
drives), but splitting things up means each of those 9 threads is
wandering around a much smaller DB.

> Does this make sense, or have I been up too long?

Heh, I'll note these need not be exclusive. :)


- Allen S. Rout
