ADSM-L

Re: Limitation of TSM DB Volumes

2003-04-12 08:51:40
From: Roger Deschner <rogerd AT UIC DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 11 Apr 2003 23:59:43 -0500
It's physical disk arms. More is better, much better. Regardless of the
number of threads, when you have more arms, more of your threads can be
doing I/O simultaneously. Among other things, this means that DBBackup
and Expiration will interfere less with other things, such as client
backups and restores. What you want to watch is the multiprogramming
level (how many I/Os are actually in flight at once), not the thread
count. However, I don't know of a good way to measure that.

It is bad to stripe the TSM DB.

JBOD is better than RAID5.

LVM is better than raw vols - a whole lot safer.

One DBVol per physical disk is best, but the penalty for having more
than one is not too terrible, as long as they are physically adjacent.
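As a rough sketch, spreading the DB across one volume per spindle looks
like this in TSM 5.x server command syntax (the paths and sizes here are
illustrative, not from this thread):

```
/* One DB volume per physical disk; FORMATSIZE formats and defines in one step */
define dbvolume /tsm/db1/dbvol01.dsm formatsize=8000
define dbvolume /tsm/db2/dbvol02.dsm formatsize=8000

/* Make the new space assignable to the database */
extend db 16000
```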

Mirroring the DB (RAID1) is good. Do it in TSM (good), in the OS (faster,
but more work for you), or in hardware (fastest of all). One advantage of
TSM mirroring over AIX LVM mirroring is that the two halves of a TSM
mirrored pair can be in different AIX Volume Groups.
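TSM-level mirroring is one command per volume pair; this sketch uses
TSM 5.x syntax with made-up paths, with each copy on a different physical
disk (and, if you like, a different AIX Volume Group):

```
/* Mirror an existing DB volume; the copy should live on another spindle */
define dbcopy /tsm/db1/dbvol01.dsm /tsm/mir1/dbvol01.dsm
define dbcopy /tsm/db2/dbvol02.dsm /tsm/mir2/dbvol02.dsm
```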

Those drawers of 9GB SSA disks are very nice disks for the TSM DB, and
they sure are cheap at used-equipment dealers. Bigger disks are less
useful for the TSM DB.

Expiration and DBBackup are single-threaded, and these will prove to be
what limits your ultimate database size. So, if an opportunity presents
itself to split your TSM users easily into two groups, consider a second
server image.

Once you have more than one server (on same machine is OK) then you have
an opportunity to run DBBackups to disk, instead of tape. Basically, one
server's DBBackups are written to the other server as archive data, which
it then migrates, expires, and reclaims, the whole nine yards, just like
any other TSM client node's archive data. This saves a LOT of tapes by
combining many incremental DBBackups onto a single tape. Setting all this
up is explained in the TSM Admin Guide.
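A minimal sketch of that setup, using TSM 5.x server-to-server virtual
volumes (server names, passwords, and addresses below are invented for
illustration; the Admin Guide has the full procedure):

```
/* On TSM_B (the target): register the other server as an archive client */
register node tsm_a secretpw type=server

/* On TSM_A (the source): define the target and a SERVER-type device class */
define server tsm_b serverpassword=secretpw hladdress=tsmb.example.com lladdress=1500
define devclass dbback devtype=server servername=tsm_b

/* DB backups now land on TSM_B as ordinary archive data */
backup db devclass=dbback type=incremental
```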

The above all applies to the Database. The Log is different, and behaves
much more like normal data. It works fine in RAID5, for instance.

Roger Deschner      University of Illinois at Chicago     rogerd AT uic DOT edu


On Fri, 11 Apr 2003 asr AT UFL DOT EDU wrote:

>=> On Thu, 10 Apr 2003 16:50:32 -0400, Fred Yang <FYang AT NSHS DOT EDU> said:
>
>> Since we estimate our DB will grow to around 180 GB and performance is
>> our primary concern, if increasing the number of DB volumes could really
>> benefit performance, I don't mind the additional space used for LVM
>> overhead, unless that overhead in turn affects performance.
>
>IMHO, "number of threads" is probably not an interesting knob to turn unless
>you are pretty confident that all the -other- performance implications of your
>differing designs are close to identical.  If I'm recalling correctly, the
>distilled Advice From The List is less focused on actual count of volumes, and
>more focused on making the volumes align neatly with what's underneath them.
>
>
>So, if you've got (say) a full drawer of 9-GB SSA disks (*koff* like me) then
>you have 12 DB volume mirrors, for a total of 6*9 GB raw space, two more log
>volumes, and two volumes waiting in the wings to be thrown in, in case of
>emergency.
>
>If I had a quarter-drawer of 36-GB drives, I'd have four volumes, one per
>drive spindle.  At that point you have to debate
>
>increase-in-performance-from-two-threads
> vs.
>decrease-in-performance-from-head-contention
>
>which in my prejudice is no contest.  I want my reads and writes serialized
>over drive heads.  Of course, that prejudice is not informed by
>experimentation, which state I'll remedy as soon as one of you would like to
>donate some 36GB SSA to me. ;)
>
>
>If your database is deployed on a Shark, then you don't care about the heads,
>you're writing to RAM.  Make the volumes big enough that they aren't a pain to
>manage, and don't worry about the threading; it won't be your bottleneck.
>
>
>- Allen S. Rout
>
