Subject: Re: Limitation of TSM DB Volumes
From: Paul Ripke <stixpjr AT BIGPOND.NET DOT AU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Sun, 13 Apr 2003 01:14:13 +1000
Note: I'm not talking striping here - I'm talking increasing the
number of I/Os outstanding on each "disk" to make use of the smarts
in the drive firmware, to *improve* random I/O performance.

Each chunk of work TSM is doing (processes, sessions, anything else?) can
generate DB I/O, which I believe is synchronous for each work thread. It
is my understanding that each DB volume in TSM has an I/O thread, which
again does its I/O synchronously (thus not requiring asynchronous I/O
support in the OS). If you only have one DB volume on a disk, there can
only be one I/O outstanding at any time for that disk, and so the smarts
in the disk have little to do (maybe read-ahead, write cache and write
re-order). If you have, say, four DB volumes on a disk, and enough TSM
"work" threads to keep the four I/O threads busy, you can have four
I/Os outstanding to the disk, and it can start re-ordering I/Os to
minimise seek time.
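
To make this concrete, here is a rough sketch (nothing that ships with
TSM - the device path, block size and thread counts are just assumptions
for illustration, and it needs a Unix Python 3) that issues synchronous
4 KB random reads from 1 and then 4 threads against the same disk. On a
drive that can queue and re-order commands, the 4-thread run should show
noticeably higher IOPS; read from a raw device or a file much larger
than RAM so the OS cache doesn't hide the disk behaviour:

import os, random, threading, time

PATH = "/dev/rdsk/c1t0d0s0"    # hypothetical raw device (or any huge file)
BLOCK = 4096                   # small, roughly DB-page-sized reads
SIZE = 2 * 1024 ** 3           # region to scatter reads over (2 GB)
DURATION = 10                  # seconds per test

def reader(counter, stop):
    # One synchronous reader, like a single TSM DB-volume I/O thread:
    # only one request outstanding from this thread at any time.
    fd = os.open(PATH, os.O_RDONLY)
    try:
        while time.time() < stop:
            off = random.randrange(0, SIZE // BLOCK) * BLOCK
            os.pread(fd, BLOCK, off)
            counter[0] += 1
    finally:
        os.close(fd)

def run(nthreads):
    counters = [[0] for _ in range(nthreads)]
    stop = time.time() + DURATION
    threads = [threading.Thread(target=reader, args=(c, stop))
               for c in counters]
    for t in threads: t.start()
    for t in threads: t.join()
    print("%d thread(s): %.0f IOPS"
          % (nthreads, sum(c[0] for c in counters) / DURATION))

run(1)   # like one DB volume on the disk: one outstanding I/O
run(4)   # like four DB volumes on the disk: four outstanding I/Os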

A similar concept is the "queue depth" parameter on hdisks in AIX. Try
setting it to one, and watch your random I/O performance drop through
the floor. We hit this when using an HDS disk unit - Oracle really
crawled.

Not all disks support this concept - in SCSI land it is "tagged command
queuing", and has been around since, I believe, SCSI-2. ATA disks have
picked it up relatively recently.

On Saturday, Apr 12, 2003, at 15:31 Australia/Sydney, Roger Deschner
wrote:

I actually tested this. Striping is very, very bad for the TSM Database.
1:1 is best.

TSM Database I/O is totally random. Response time is much more important
than throughput - it never transfers a large amount of data in a single
operation. It's all small amounts of data, scattered randomly across the
entire database. Therefore trying to get several physical disks working
at once to do one I/O operation just slows things down. It's faster to
move only one arm for one I/O, and let the other arms move for other
threads simultaneously. Most of the "experts" you talk to will focus on
throughput, which is why they are wrong about the TSM Database. I had
those same experts telling me how to do things in my shop, which is why
I actually tried striping, and found out that it ran a lot slower.
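
A quick back-of-envelope illustration of that point (the figures below
are my own assumptions for a typical 10k RPM disk of that era, not
numbers from Roger's tests): for a small random read, the transfer time
is almost nothing next to the seek and rotation, so striping one read
across several disks can't buy much, while it ties up several arms at
once.

seek_ms    = 5.0                          # assumed average seek
rotate_ms  = 60.0 / 10000 / 2 * 1000.0    # half a rev at 10k RPM = 3 ms
media_mb_s = 50.0                         # assumed sustained transfer rate
io_kb      = 4.0                          # one small DB page read

transfer_ms = io_kb / 1024.0 / media_mb_s * 1000.0     # ~0.08 ms
total_ms = seek_ms + rotate_ms + transfer_ms           # ~8.1 ms

print("positioning: %.1f ms, transfer: %.2f ms, total: %.1f ms"
      % (seek_ms + rotate_ms, transfer_ms, total_ms))
# Splitting the 4 KB across N disks only shrinks the tiny transfer term;
# every disk involved still pays the full seek + rotation, and those arms
# are then busy instead of serving other threads' I/Os.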

However, you are correct that the Log and Storage Pool volumes are
different from the database. Especially in the case of Storage Pool
volumes, you want throughput. Let our experts have their way here.

Roger Deschner      University of Illinois at Chicago
rogerd AT uic DOT edu


On Sat, 12 Apr 2003, Paul Ripke wrote:

My gut feel (and the advice from an IBM TSM support guru here in
Australia) is to have 3 or 4 volumes per spindle for database disks.
Those "spindles" may be logical RAID5 arrays as in a shark, etc. The
reason, is to allow 3 or 4 outstanding requests to each disk, which
the disk can then re-order to minimise over-all seek times. Since the
TSM DB is primarily hit with random read I/O, this *should* be a win.

Of course, trying to test this and come up with some hard numbers
would be a right-royal PITA... The Sun TSM servers I manage used to
have the "big & few" DB volume design, but I have since migrated to a
3-volumes-per-spindle design, mirrored in TSM. I can't say whether
performance is greatly improved, and I have no numbers, but it
"feels" faster. Makes me wish I had recorded expiration start and
finish times (although that's probably a single thread in TSM and
won't prove anything)...

Since log volumes are sequential read-write, this does not apply to
them. And storage pool volumes? That depends on a whole bunch of
other factors.

Cheers,
--
Paul Ripke
Unix/OpenVMS/TSM/DBA
101 reasons why you can't find your Sysadmin:
68: It's 9AM. He/She is not working that late.
-- Koos van den Hout



--
Paul Ripke
Unix/OpenVMS/TSM/DBA
101 reasons why you can't find your Sysadmin:
68: It's 9AM. He/She is not working that late.
-- Koos van den Hout
