Subject: Re: Limitation of TSM DB Volumes
From: Paul Ripke <stixpjr AT BIGPOND.NET DOT AU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 14 Apr 2003 23:30:11 +1000
On Monday, Apr 14, 2003, at 18:21 Australia/Sydney, Richard L. Rhodes
wrote:

> Our current TSM server uses Shark storage.  We run an 80 GB TSM DB on
> 2 Shark RAID sets (8-packs) of 18 GB drives.  That's 16 spindles.
> Each 8-pack is configured into 1 LUN, so the 2 LUNs show up in AIX as
> 2 hdisks (well, actually 4 with 4 SCSI paths, and DPO creating vpath
> devices).  We pull the vpaths into one volume group and put all of it
> into one filesystem.  The logical volume for the filesystem is set
> with the "maximum" inter-disk allocation policy, which alternates PP
> allocation across the vpaths.  All 20 DB volumes are in this same
> filesystem along with the log (5 GB), spread across all 16 spindles.
>
> This was designed/set up by a local IBM TSM expert.  It has performed
> very well over the years - we are very happy.

It's having 20 dbvols that makes the difference - that's 20 DB I/Os in
flight...

> Anyway . . .
>
> q)  What is the I/O size of the Oracle DB?  It uses 4K pages
> internally.  Is the I/O size issued by TSM 4K??

At minimum, Oracle uses the DB block size, usually 4K, 8K or 16K.
Oracle can also do read-aheads; I forget the names of the parameters.
I believe, but haven't checked, that TSM does 4K DB I/Os.
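
For what it's worth, something like this from SQL*Plus will show the
block size and the multiblock read setting (if memory serves,
db_file_multiblock_read_count is one of the read-ahead parameters; the
"/ as sysdba" login here is just an example):

ksh$ sqlplus -s "/ as sysdba" <<'EOF'
show parameter db_block_size
show parameter db_file_multiblock_read_count
EOF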

> I am getting ready to bring up a new TSM server on an EMC Clariion
> disk subsystem.  Its default stripe depth in a Clariion RAID5 is 128K
> (a 128K chunk of data per disk).  This is much bigger than the TSM DB
> I/O size of 4K.  Even dropping down to a 4K stripe depth would fit
> one I/O per physical disk drive (spindle).

For the TSM dbvols, I'd recommend having about 2 or 3 (even 4) per
physical disk.  This will give you extra parallelism and lower latency.
Check the stats below - more threads is better, especially with stripes
and RAID5.
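
As a rough sketch of how that might look (TSM 5.x syntax from memory -
the paths and sizes here are made up), format a handful of dbvols in
the one filesystem and define them all to the server:

ksh$ dsmfmt -m -db /tsm/db/dbvol01.dsm 2048
ksh$ dsmfmt -m -db /tsm/db/dbvol02.dsm 2048
ksh$ dsmfmt -m -db /tsm/db/dbvol03.dsm 2048
ksh$ dsmfmt -m -db /tsm/db/dbvol04.dsm 2048

Then, from a dsmadmc admin session:

define dbvolume /tsm/db/dbvol01.dsm
define dbvolume /tsm/db/dbvol02.dsm
define dbvolume /tsm/db/dbvol03.dsm
define dbvolume /tsm/db/dbvol04.dsm
extend db 8192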

> q)  Any idea what the I/O size is that TSM issues for a disk based
> staging pool?  I can't find any info about this at all.  I would
> think this would be a very large I/O.

Check the archives - I've run ktrace/truss against dsmserv before.
According to <http://msgs2.adsm.org/cgi-bin/get/adsm0303/285/1.html>, I
found it was 256K. I guess I'll have to believe that.
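
If you want to repeat the check, something like the following should
show the length argument on the read/write calls while data is moving
through a disk pool (a sketch only - the process match and syscall list
are assumptions, and dsmserv may be using async I/O calls instead):

ksh$ PID=$(ps -ef | awk '/[d]smserv/ { print $2; exit }')
ksh$ sudo truss -t read,write -p "$PID" 2>&1 | head -40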

BTW, regarding random I/O performance, here are some benchmarks I've
run in the past. The tool is something I wrote for exactly this kind of
job (similar to a tool called "rawio", I believe). -t specifies the
thread count, i.e. how many I/Os are in flight - it *really* makes a
difference:

On AIX with "Enterprise" disk unit (4 disk RAID5):
ksh$ iohammer -t 1 -w 0 -a -b 4k -c 10000 -f zz
Size 2097152000: 48.216 secs, 10000 IOs, 0 writes, 207.4 IOs/sec, 4.82 ms
ksh$ iohammer -t 2 -w 0 -a -b 4k -c 10000 -f zz
Size 2097152000: 30.934 secs, 10000 IOs, 0 writes, 323.3 IOs/sec, 3.09 ms
ksh$ iohammer -t 4 -w 0 -a -b 4k -c 10000 -f zz
Size 2097152000: 23.540 secs, 10000 IOs, 0 writes, 424.8 IOs/sec, 2.35 ms
ksh$ iohammer -t 8 -w 0 -a -b 4k -c 10000 -f zz
Size 2097152000: 21.585 secs, 10000 IOs, 0 writes, 463.3 IOs/sec, 2.16 ms
ksh$ iohammer -t 16 -w 0 -a -b 4k -c 10000 -f zz
Size 2097152000: 16.639 secs, 10000 IOs, 0 writes, 601.0 IOs/sec, 1.66 ms

On a Sun, direct to an FC-AL 9 GB drive:
ksh$ sudo iohammer -t 1 -w 0 -b 4k -c 10000 -s 8g -f /dev/rdsk/c1t48d0s2
Size 8589934592: 80.083 secs, 10000 IOs, 0 writes, 124.9 IOs/sec, 8.01 ms
ksh$ sudo iohammer -t 2 -w 0 -b 4k -c 10000 -s 8g -f /dev/rdsk/c1t48d0s2
Size 8589934592: 75.682 secs, 10000 IOs, 0 writes, 132.1 IOs/sec, 7.57 ms
ksh$ sudo iohammer -t 4 -w 0 -b 4k -c 10000 -s 8g -f /dev/rdsk/c1t48d0s2
Size 8589934592: 56.416 secs, 10000 IOs, 0 writes, 177.3 IOs/sec, 5.64 ms
ksh$ sudo iohammer -t 8 -w 0 -b 4k -c 10000 -s 8g -f /dev/rdsk/c1t48d0s2
Size 8589934592: 47.712 secs, 10000 IOs, 0 writes, 209.6 IOs/sec, 4.77 ms
ksh$ sudo iohammer -t 16 -w 0 -b 4k -c 10000 -s 8g -f /dev/rdsk/c1t48d0s2
Size 8589934592: 40.748 secs, 10000 IOs, 0 writes, 245.4 IOs/sec, 4.07 ms

On the same Sun, to a five-disk VxVM stripe:
ksh$ sudo iohammer -w 0 -t 1 -b 4k -c 10000 -s 42g -f /dev/vx/rdsk/data2dg/stripevol
Size 45097156608: 65.151 secs, 10000 IOs, 0 writes, 153.5 IOs/sec, 6.52 ms average seek
ksh$ sudo iohammer -w 0 -t 2 -b 4k -c 10000 -s 42g -f /dev/vx/rdsk/data2dg/stripevol
Size 45097156608: 35.080 secs, 10000 IOs, 0 writes, 285.1 IOs/sec, 3.51 ms average seek
ksh$ sudo iohammer -w 0 -t 4 -b 4k -c 10000 -s 42g -f /dev/vx/rdsk/data2dg/stripevol
Size 45097156608: 20.928 secs, 10000 IOs, 0 writes, 477.8 IOs/sec, 2.09 ms average seek
ksh$ sudo iohammer -w 0 -t 8 -b 4k -c 10000 -s 42g -f /dev/vx/rdsk/data2dg/stripevol
Size 45097156608: 13.323 secs, 10000 IOs, 0 writes, 750.6 IOs/sec, 1.33 ms average seek
ksh$ sudo iohammer -w 0 -t 16 -b 4k -c 10000 -s 42g -f /dev/vx/rdsk/data2dg/stripevol
Size 45097156608: 9.675 secs, 10000 IOs, 0 writes, 1033.6 IOs/sec, 0.97 ms

Cheers,
--
Paul Ripke
Unix/OpenVMS/TSM/DBA
101 reasons why you can't find your Sysadmin:
68: It's 9AM. He/She is not working that late.
-- Koos van den Hout