Subject: Re: Need to Shrink Storage Pool
From: Trevor Foley <Trevor.Foley AT BANKERSTRUST.COM DOT AU>
Date: Sun, 28 Jun 1998 08:11:21 +1000
Hi Ken,

It sounds like your current storage pool does not have collocation
enabled, right? So, the first thing would be to do that by using the
command UPDATE STGPOOL storage-pool-name COLLOCATE=YES. Once this is
done, all new volumes created in this storage pool will have collocation
enabled.
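
For example, assuming your primary tape pool is called TAPEPOOL (just a
placeholder name, substitute your own):

        update stgpool TAPEPOOL collocate=yes
        query stgpool TAPEPOOL format=detailed

The QUERY is only there so you can confirm that the collocation setting
has taken effect.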

ADSM isn't going to be able to do the whole thing for you automatically
through reclamation because you won't be able to distinguish between
the old non-collocated volumes and the new collocated volumes.

One option would be to create a new storage pool with collocation
enabled, and then (a rough command sketch follows the list):
*       point all existing copy groups that use the existing pool to the
new pool (UPDATE COPYGROUP DESTINATION=new-pool, remembering to do an
ACTIVATE POLICYSET to make the changes current)
*       update any other storage pools that point to the existing pool
so that they point to the new pool (UPDATE STGPOOL
NEXTSTGPOOL=new-pool)
*       update the current storage pool to set its NEXTSTGPOOL to the
new pool
*       update the current storage pool to be read-only (UPDATE STGPOOL
ACCESS=READONLY)
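
The pool, device class, domain, policy set and management class names
below are only placeholders for your own; double-check each command
against the Admin Reference before running it:

        define stgpool NEWTAPEPOOL your-devclass collocate=yes maxscratch=9999
        update copygroup your-domain your-policyset your-mgmtclass type=backup destination=NEWTAPEPOOL
        activate policyset your-domain your-policyset
        update stgpool your-diskpool nextstgpool=NEWTAPEPOOL
        update stgpool OLDTAPEPOOL nextstgpool=NEWTAPEPOOL
        update stgpool OLDTAPEPOOL access=readonly

Repeat the UPDATE COPYGROUP for every backup copy group that points at
the old pool, and for any archive copy groups as well (TYPE=ARCHIVE).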

You could then force reclamation on all of the old storage pool volumes
by dropping the reclamation threshold (UPDATE STGPOOL RECLAIM=%). You
are better off doing this little by little, as ADSM seems to pick the
volumes with the most data (within the reclamation limits) rather than
the least, which seems the wrong way round.
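
For example, starting with a fairly high threshold and working down as
each batch of volumes empties (OLDTAPEPOOL is again just a placeholder):

        update stgpool OLDTAPEPOOL reclaim=60
        query process

Setting RECLAIM=60 makes volumes with 60% or more reclaimable space
eligible. Once those have been processed, step the value down to 50, 40
and so on, and use QUERY PROCESS to keep an eye on the reclamation
processes as they run.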


Regards,

Trevor
        -----Original Message-----
        From:   Ken Rackow [SMTP:Ken_Rackow AT EM.FCNBD DOT COM]
        Sent:   Sunday, 28 June 1998 4:21 am
        To:     ADSM-L AT VM.MARIST DOT EDU
        Subject:        Need to Shrink Storage Pool

        Over the past couple of years, I have done an extraordinarily
        poor job of managing my ADSM storage pool.  The result of this
        neglect has been to create a situation in which backup files
        are scattered over a truly HUGE number of cartridges.  For
        example, I recently ran SHOW VOLUMEUSAGE against one 20GB node
        and the output indicated that this node was currently using
        more than 300 volumes.  The entire storage pool, which is
        intended to be a backup for approximately 250GB, is composed
        of nearly 8000 volumes.

        Because of the potential impact of this on the restore of a
        large file space, backed up over years, I have to try to get
        this situation under control immediately.  In answer to a
        related post to this list, it was suggested that I could use
        MOVE DATA to force a reclamation to reduce the number of
        volumes needed to restore a node.  Although I have looked at
        the MVS Admin Guide and the MVS Reference, I have not been
        able to figure out how to use this command in my situation.
        Also, is there a way to change the reclamation thresholds for
        the tape pool to force ADSM to work on this full time and for
        as long as it takes?

        Is there anyone that can help me with this?  I'm looking for
        very specific instructions on how to use MOVE DATA or any
        other administrative command or approach to reduce the number
        of volumes I would need to restore a file space, and in
        general to reduce the size of the entire pool.  Obviously,
        since we're talking about production data, any approach
        involving the deletion of existing file spaces would be
        unacceptable.  Ideally, I'd like to try this against one node,
        to see how it works.  I appreciate that this will require a
        large amount of system resources, including CPU and tape, and
        might be a painful, time-consuming process.  Any help I can
        get here would be greatly appreciated.

        My MVS server is at version 3, release 1, level 1.2

        My clients are mostly at version 3

        Thanks,

        Ken Rackow