ADSM-L

Re: TSM 5.1 on Solaris 8 64-bit - performance tuning question

2002-11-11 18:08:48
Subject: Re: TSM 5.1 on Solaris 8 64-bit - performance tuning question
From: Kent Monthei <Kent.J.Monthei AT GSK DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 11 Nov 2002 18:04:15 -0500
Ricardo, thanks.  However, MaxNumMP is already set to 4 for this node.

I should add that the database is spread across approx 20 filesystem
volumes/mountpoints.  All are configured to go direct-to-tape via INCLEXCL
management class bindings.  Presently, I see 4 server sessions for the
node, but still only see 2 mounted tape volumes, and only 2 of the 4
sessions are sending substantial amounts of data to the server.  The other
2 sessions are in IDLEW status, with wait-times of 25-40 minutes.
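
For reference, this is roughly how the session states and actual tape mounts
can be checked from a 'dsmadmc' administrative session:

   query session        - lists each client session with its state (Run,
                          IdleW, MediaW, ...) and its wait time
   query mount          - lists the tape volumes currently mounted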

Kent Monthei
GlaxoSmithKline





"Ricardo Ribeiro" <ricardo.ribeiro AT ADVANCEPCS DOT COM>

Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
11-Nov-2002 15:13
Please respond to "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>

To:      ADSM-L
cc:
Subject: Re: TSM 5.1 on Solaris 8 64-bit - performance tuning question

Try updating your client node with "Maximum Mount Points Allowed=4"; this
should tell the client it can use that many drives...
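
A minimal sketch of that update from a 'dsmadmc' administrative session,
assuming a hypothetical node name of DWNODE:

   update node dwnode maxnummp=4
   query node dwnode format=detailed

The QUERY NODE output should then show "Maximum Mount Points Allowed: 4".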



Kent Monthei <Kent.J.Monthei AT GSK DOT COM>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
11/11/2002 11:49 AM
Please respond to "ADSM: Dist Stor Manager"

To:      ADSM-L AT VM.MARIST DOT EDU
cc:
Subject: TSM 5.1 on Solaris 8 64-bit - performance tuning question

We have a 1.2TB (& growing) Oracle Data Warehouse on one domain of a Sun
Enterprise 10000 (E10K) Server.  The same E10K domain also has TSM 5.1.1.6
Server and TSM 5.1.1.6 Client installed and backs itself up to a
locally-attached SCSI tape library with 4 DLT7000 Drives.

We perform a database shutdown, a full cold backup of the OS filesystem,
then a database restart (no RMAN or TDP for Oracle involved).  The full
cold backup goes direct-to-tape.  Our objective is to keep all 4 drives
active near-100% of the time, to achieve the shortest possible backup
window.
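
The direct-to-tape part is handled with include statements in the client's
include-exclude file, binding the database filesystems to a management class
whose backup copy group points at the tape storage pool.  A minimal sketch,
where the paths and the DIRECT2TAPE class name are placeholders:

   include  /u01/oradata/.../*   DIRECT2TAPE
   include  /u02/oradata/.../*   DIRECT2TAPE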

We're trying to take advantage of ResourceUtilization in the newer
multi-threaded TSM Client, but I'm having trouble getting the Client to
consistently start/maintain 4 data sessions to tape.  ResourceUtilization
is set to 8.  Throughout most of the backup, 5-6 sessions are active.
However, we are only seeing 2 mounted tapes most of the time, and the backup
is taking nearly twice as long as it should.

Right now, we are not using Shared Memory protocol (disabled due to some
'dsmserv' crashes that failed to release shared memory).  We are using
tcpip protocol, and are using TCPServerAddress=127.0.0.1 (localhost) for
all tcpip sessions.
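
For reference, ResourceUtilization and the TCP/IP settings above are plain
client options in the server stanza of dsm.sys; a minimal sketch, with the
stanza name and port shown here as placeholders:

   SErvername            tsmlocal
      COMMMethod          TCPip
      TCPServeraddress    127.0.0.1
      TCPPort             1500
      PASSWORDAccess      generate
      RESOURCEUTILIZATION 8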

Does anyone know a way to force a single 'dsmc sched' process to start a
minimum number of threads (>= #tape drives), or know probable reasons why our
configuration isn't doing it now?

- rsvp with comments & tuning tips, thanks.

Kent Monthei
GlaxoSmithKline
Kent.J.Monthei AT GSK DOT com