ADSM-L

DATASET NOT ALLOCATED

Subject: DATASET NOT ALLOCATED
From: INTERNET.OWNERAD at SNADGATE
To: Jerry Lawson at ASUPO
Date: 5/28/97 12:22PM
I have been following this thread with interest because I have experienced
problems in this area, and am not clear on how this whole process operates.
I wonder if we don't have some inconsistencies between how the MVS server
allocates drives and how another server might do it (such as an AIX machine).
Please bear with me, because I am not an AIX or UNIX guru.

For the sake of simplicity (and not to start a hissy fit with some of the
members of this list), let me categorize the servers by saying we have S/390
machines, and non-S/390 machines.

On the non-S/390 machines, drives **usually** aren't shared with other
tasks.  Thus, if I have 4 drives attached to my machine, I define device
classes to ADSM whose mount limits will probably total 4.  (There may be
exceptions to this, depending on workloads - perhaps someone can point out a
couple of common ones.)
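As a concrete illustration of that "total of 4" - and this is a hedged sketch; the exact syntax depends on your ADSM level and platform, so check your Administrator's Reference - a device class capped at those 4 drives might be defined something like:

```
define devclass tapeclass devtype=cartridge mountlimit=4
```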

However, on S/390 machines (MVS I am sure of, VM is not as clear), the number
of devices is usually much larger than the number ADSM generally needs.  We
**Usually** cannot afford to dedicate drives to ADSM (such as in an MVS
subpool), and so to maximize overall tape utilization, we share out of the
general tape pool.  Actually, my mindset on the mountlimit parm is more a
question of how many devices it is reasonable to let ADSM take from the
overall pool at any one time - being a good citizen and user of the tape
pool, so to speak, and not a pig.

The problem occurs for me (and I suspect most people) when there is a crunch;
for us that is month-end processing, when we close our company books.  At
that time, because of the amount of work to be done, we could probably
support at least twice the number of drives that we have installed.  The
other 29 days of the month those drives would sit idle, so the bean counters
will not allow us to get more.

So what happens?  On these days, ADSM must compete for tape drives in a pool
that often has NO available drives.  Thus we see the messages that were
outlined in the original posting below.  I think this causes some problems
with ADSM as it competes with MVS allocation.  Here is how I understand the
problem......

With the maintenance fixes (beginning with L10???? - we went from 7 to
12, and so I am not completely in tune with what happened when), the method
of tape allocation changed.  Originally, an Enqueue for SYSZTIOT was issued
when an allocation request was made.  If no drives were available, ADSM would
wait, holding the Enqueue.  If left long enough, ADSM would drag to a halt
until the tape was actually mounted.  (BTW we run JES3, but I don't think
that has any real effect here).

After the method of allocation was changed (APAR PN87848 - I am paraphrasing
from the cover letter now), ADSM will try the allocation, and if no drives
are available, it will wait 10 seconds and then reissue the command.  This
can happen up to 30 times (300 seconds, or 5 minutes).  If no drive is
available then, a WTOR is issued (ANR5373).
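The retry behavior described in that cover letter can be sketched roughly like this - a purely illustrative Python model, where `allocate_with_retry` and `try_allocate` are names I'm making up, not actual ADSM internals:

```python
import time

def allocate_with_retry(try_allocate, retries=30, wait_seconds=10):
    """Illustrative model of the post-PN87848 behavior: attempt the
    allocation, sleep, and retry; after the last attempt the failure
    is surfaced to the operator (the ANR5373 WTOR)."""
    for _ in range(retries):
        drive = try_allocate()      # ask MVS for a drive
        if drive is not None:
            return drive            # got one - proceed with the mount
        time.sleep(wait_seconds)    # no drive; wait 10 seconds and retry
    return None                     # 30 tries (5 minutes) exhausted: WTOR
```

The point to notice is that between attempts nothing holds the SYSZTIOT Enqueue - which is exactly what opens the window described next.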

A new problem induced here is that now ADSM does not get the next available
drive **UNLESS it is very lucky**.  I say this based on what I have seen on
JES3 - if a drive becomes available during the 10 second wait, JES3 will be
dispatched, and seeing a drive, and having a queue of batch work wanting
drives, will grab the drive for someone else.  When your timer pops, you wake
up to find no drives available.  Thus I believe the new fixes add to the
problem rather than take away from it.  Now you can automate the reply, as
someone suggested, but then, as the cover letter states, you go back into an
Enqueue on SYSZTIOT, and the original problem returns.
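A toy model shows why a timed poller loses this race: a freed drive is handed straight to the queue of standing batch requests the moment JES3 is dispatched, before the sleeper's timer ever pops.  (All names here are illustrative - this is not real JES3 or ADSM logic, just the shape of the race.)

```python
class DrivePool:
    """Toy model: freed drives go straight to queued batch work
    (standing allocation requests); a task that only polls every
    N seconds never sees a free drive while the queue is non-empty."""
    def __init__(self, queued_batch_jobs=5):
        self.free = 0
        self.batch_queue = queued_batch_jobs  # jobs already waiting

    def release_drive(self):
        # JES3 is dispatched the instant a drive frees up...
        self.free += 1
        if self.batch_queue > 0:
            self.batch_queue -= 1   # ...and hands it to queued work
            self.free -= 1

    def poll(self):
        # ADSM's 10-second timer pops later and checks for a drive
        return self.free > 0

pool = DrivePool()
pool.release_drive()   # a drive frees up during ADSM's wait
print(pool.poll())     # False: the batch queue already took it
```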

I might add that I thought I heard that another option was added in another
APAR that **Might** have been included in the level 12 fixes, but a search of
the ANR2210 member in SYS1.SAMPLIB has not turned up anything that seemed to
support this.

NOW for the good part!!!!  I get to state my opinions!!!!

What seems to be happening is that ADSM is holding on to the drives that it
has, but is not using them effectively!  As it was explained to me - if ADSM
reaches end of volume on a tape, a mount request is issued for a new tape on
a new drive (I believe this is the same whether this is for a scratch mount
or for another volume on an input tape).  The logic behind this, as explained
to me, is to improve performance - you don't wait for the unload/rewind to
complete - just get a new drive and go.

When all is well, and there are drives available - this is great!  But if MVS
has no drives, but ADSM has not reached the mountlimit, problems result.
When I saw this problem before, what would happen is as soon as a drive
became available, and a mount was issued, ADSM started to go again, and the
next thing that happened in the MVS log was the dismount/remove for the old
tape.  In one case, we waited for a tape for 20 hours, holding 2 drives and
needing a third.  When the operator finally recognized it (no smart comments,
please) and varied on another drive, down came one of the mounted tapes!

What I think ADSM needs is to **REUSE** the existing drives.  If it
already holds a drive and can't get another, either mount a new tape on that
drive (as MVS does for the next reel of a multi-reel file), or if this is not
possible, then at least release the existing tapes that you are done with.
If you want, make this optional.  I would think that this is not as big a
performance penalty as one might think, especially since many tapes use
serpentine technology now, so when they are at end of reel, the tape is
actually back at the beginning, and no lengthy rewind is required.
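The fallback order I'm suggesting could be sketched like this - again purely illustrative Python, not ADSM code; `Drive` and `next_volume_drive` are made-up names for the sake of the example:

```python
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    finished: bool = False   # True once ADSM is done with the mounted tape

    def unload(self):
        self.finished = False  # dismount; drive is ready for a new mount

def next_volume_drive(held_drives, try_allocate_new):
    """Proposed fallback order (my suggestion, not actual ADSM
    behavior): prefer a fresh drive for performance, but if none
    is available, remount on a held drive whose tape is finished -
    as MVS does for the next reel of a multi-reel file."""
    drive = try_allocate_new()
    if drive is not None:
        return drive            # today's behavior: grab a new drive and go
    for d in held_drives:
        if d.finished:
            d.unload()          # reuse: dismount the old tape here instead
            return d
    return None                 # nothing free and nothing reusable; wait
```

With serpentine media the reuse path costs little, since the tape is already positioned at the beginning when it hits end of reel.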

So Carol - what do you think?  Am I really off base?  Anyone else want to
comment?

end of reel - oops - I mean soapbox...... :-)

Jerry Lawson
jlawson AT itthartford DOT com



______________________________ Forward Header __________________________________
Subject: DATASET NOT ALLOCATED
Author:  INTERNET.OWNERAD at SNADGATE
Date:    5/28/97 12:22 PM


Hi,
The IEC messages are for a tape that was mounted on a drive, the next
2 messages are for the allocation of another tape, that did not occur
because there weren't enough drives available.  You should check your
mount limit for your tape devclass and make sure it is appropriate for
the drives you have available.  The behavior of ADSM at this point
depends on the level; some fixes have gone into the more recent levels
changing how the allocation is handled, with an operator reply if the
allocation does not succeed.  ADSM opens storage volumes with prefix.BFS - the same
dataset name is used for all tape storage volumes.  During dynamic
allocation ADSM adds another qualifier so that each allocation request is
unique.
  Carol Trible
 ADSM Development

>>IEC705I TAPE ON 360, A04425,SL,COMP,ADSM,ADSM,ADSM.BFS
>>IEC271I MESSAGE DISPLAY 'A04425' ON 360 ISSUED BY JOB ADSM
>>IKJ56241I DATA SET ADSM.BFS.V206 NOT ALLOCATED+
>>IKJ56241I NO UNIT AVAILABLE