Subject: Database mirroring, again
From: Roger Deschner <rogerd AT UIC DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 6 Feb 2006 13:07:08 -0600

I know we've been over this, but times change, and technology changes.
Conventional Wisdom on this list has been that the best disk layout for
your TSM Database is:

JBOD disks, Raw volumes, mirrored by TSM, with 2 dbvols per physical
volume.

(This is what I am using now.)
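To be concrete, here is roughly what that looks like on my system (TSM 5.x
syntax, with made-up volume names and sizes, so don't take these literally):

  # AIX side: two raw LVs per physical JBOD disk, mirror targets on another spindle
  mklv -y tsmdb01  -t raw dbvg 64 hdisk2
  mklv -y tsmdb02  -t raw dbvg 64 hdisk2
  mklv -y tsmdb01m -t raw dbvg 64 hdisk3
  mklv -y tsmdb02m -t raw dbvg 64 hdisk3

  # TSM admin command line: primary dbvols plus TSM-managed mirror copies
  define dbvolume /dev/rtsmdb01
  define dbvolume /dev/rtsmdb02
  define dbcopy /dev/rtsmdb01 /dev/rtsmdb01m
  define dbcopy /dev/rtsmdb02 /dev/rtsmdb02m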

I've got all the database people I talk to here saying I'm crazy. They
say to use RAID, with lots of striping, and mirrored in hardware.
Example: IBM Redbook SG24-5511-01 "Database Performance Tuning on AIX".
Example 2: Oracle documentation. (At least I have talked them out of
RAID5, which we know to be a dog on writes; client node backups, being
write-heavy, are the worst database performance issue we have.)

After I disabused them of RAID5, they say:

Hardware RAID10, raw volumes, and no advice about dbvol size.

Has anybody actually tried both and can comment on the comparison? I'm
at the point where I need to enlarge my database anyway, and it's not
performing as well as I'd like, so I figure this is a good time to
change its layout.

I'm thinking of:

Hardware 2x2 SSA RAID10 (2-way striped mirrored pairs), raw volumes, 4
dbvols per virtual RAID volume (hdisk), which is still 2 dbvols per real
disk.
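In command terms, that would look something like the following (again, the
names and sizes are placeholders, not a worked-out plan):

  # hdisk10 = one 2+2 SSA RAID10 array presented by the adapter as a single hdisk
  mklv -y tsmdb10 -t raw dbvg 64 hdisk10
  mklv -y tsmdb11 -t raw dbvg 64 hdisk10
  mklv -y tsmdb12 -t raw dbvg 64 hdisk10
  mklv -y tsmdb13 -t raw dbvg 64 hdisk10

  # TSM side: no DEFINE DBCOPY this time -- the hardware does the mirroring
  define dbvolume /dev/rtsmdb10
  define dbvolume /dev/rtsmdb11
  define dbvolume /dev/rtsmdb12
  define dbvolume /dev/rtsmdb13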

There has been some discussion of relative database corruption risk with
TSM mirroring versus hardware or OS mirroring. Some say that TSM
mirroring gives you greater protection against software-caused
corruption. I don't get it, as long as MIRRORWRITEDB is set to Parallel.
Unless I'm missing something, TSM MIRRORWRITEDB PARALLEL should have the
exact same risk level as hardware/OS mirroring. Has anybody actually had
their skin saved by TSM mirroring, as opposed to hardware/OS mirroring,
or is this greater protection just hypothetical?
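For reference, the option I mean is the server option in dsmserv.opt; I'm
quoting the 5.x syntax from memory, so check the Admin Reference:

  * dsmserv.opt
  * PARALLEL writes both copies of a database page at the same time;
  * SEQUENTIAL writes one copy, then the other.
  MIRRORWRITE DB PARALLEL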

Roger Deschner      University of Illinois at Chicago     rogerd AT uic DOT edu
