ADSM-L

Volume Usage

From: "Paul L. Bradshaw" <pbradshaw AT VNET.IBM DOT COM>
Date: Tue, 20 Feb 1996 16:36:23 PST
Remember that you can limit the number of volumes used for colocation.  If you
have an open-ended storage pool, you will get close to perfect
colocation (one or more volumes per user, and users do not share volumes).  If you
have a closed-end or limited storage pool, volumes will be shared
among users as they fill.  Adding volumes to the storage pool as required
helps control the spread of data.

So, for 5,000 users you may wish to allocate only 2,000 volumes if that is
sufficient to hold the data from those users.
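As a rough sketch of that trade-off (a hypothetical model, not ADSM code or its actual placement algorithm), here is one way to picture how a capped pool forces sharing while an open-ended pool keeps each user on separate volumes:

```python
# Toy model of colocation under a volume cap (hypothetical, not ADSM's
# real placement logic): each user gets a fresh volume while scratch
# volumes remain under the cap; once the cap is reached, data spills
# onto the existing volume with the most free space, so users share.

def assign_volumes(user_gb, max_volumes, volume_capacity_gb):
    """Return {volume_index: [users on that volume]} for a capped pool."""
    volumes = []  # each entry: [free_gb, [users]]
    for user, gb in user_gb.items():
        remaining = gb
        # Prefer a fresh volume (perfect colocation) while the cap allows.
        if len(volumes) < max_volumes:
            volumes.append([volume_capacity_gb, []])
        # Otherwise fill existing volumes, emptiest first.
        while remaining > 0:
            vol = max(volumes, key=lambda v: v[0])
            if vol[0] <= 0:
                raise RuntimeError("storage pool is full")
            used = min(remaining, vol[0])
            vol[0] -= used
            if user not in vol[1]:
                vol[1].append(user)
            remaining -= used
    return {i: users for i, (_, users) in enumerate(volumes)}
```

With a high cap every user lands on a private volume; with a low cap the later users end up sharing volumes with earlier ones, which is exactly the "limited storage pool" behavior described above.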

The problem comes in how to do this in a practical manner.
Full volume dumps from each client on a weekly basis are not a good idea
(way too much data to process in a given window), so you have to come up with
a perpetual incremental approach.  We call ours progressive incremental.
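A back-of-the-envelope comparison makes the window problem concrete (the client count, sizes, and change rate below are illustrative assumptions, not figures from this post):

```python
# Hypothetical arithmetic: weekly fulls move every byte every week,
# while a progressive-incremental scheme moves the full data once and
# then only the changed fraction each subsequent week.

def weekly_fulls_gb(total_gb, weeks):
    """Total data moved over the window if every client dumps in full weekly."""
    return total_gb * weeks

def progressive_incremental_gb(total_gb, change_rate, weeks):
    """One initial full, then only the changed fraction each later week."""
    return total_gb + total_gb * change_rate * (weeks - 1)

# Assumed example: 10,000 GB of client data, 5% weekly change, 4 weeks.
full = weekly_fulls_gb(10_000, 4)                   # 40,000 GB moved
inc = progressive_incremental_gb(10_000, 0.05, 4)   # 11,500 GB moved
```

Even with generous assumptions, the full-dump line grows with every week in the window, which is why it cannot fit the processing window at this scale.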

Doing this leads to some back-end management challenges, and colocation is one
technology used to help solve them.  If you never do full dumps, you need a
way of locating the data for a given system together.  One way to do this
for offsite and onsite storage pools is to use disk cache space.  While data is
on disk, do the storage pool backup from there.  Then do the migrations to tape
for the primary storage pools.  For the overflow situations, tape-to-tape
works fine (we are also looking at synchronous copies in addition to
the current asynchronous mode).
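The mount savings from copying while data is still on disk can be sketched with toy accounting (the mount counts below are assumptions for illustration, not measured ADSM behavior):

```python
# Toy mount accounting (hypothetical): copying to the copy pool while
# the data sits in the disk cache reads from disk, so no source tape
# mounts are needed; a tape-to-tape copy must mount each source volume.

def copy_mounts(n_source_volumes, source_is_disk):
    """Estimated tape mounts for one storage pool backup pass."""
    target_mounts = 1  # assume the output volume stays mounted throughout
    source_mounts = 0 if source_is_disk else n_source_volumes
    return source_mounts + target_mounts

disk_first = copy_mounts(50, source_is_disk=True)     # just the target tape
tape_to_tape = copy_mounts(50, source_is_disk=False)  # every source, plus target
```

Under these assumptions, backing up from the disk cache before migration turns fifty-odd mounts into one, which is the motivation for ordering the copy ahead of the migration.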

Now, if colocation is on, tape reclamations in the offsite pool will require
fewer tape mounts to re-copy that data to another tape.  You can also
choose to do a full storage pool backup once or twice a year if that meets
your requirements better.
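The reclamation decision itself can be sketched as follows (a hypothetical threshold check, mirroring the general idea of a reclaim percentage rather than ADSM's exact internals):

```python
# Sketch of reclamation selection (hypothetical): a volume becomes a
# reclamation candidate once the fraction of reclaimable (expired or
# empty) space on it passes the configured threshold; the remaining
# live data is then re-copied to another tape and the volume freed.

def reclaim_candidates(volumes, reclaim_pct):
    """volumes: {volume_name: percent reclaimable, 0-100}."""
    return sorted(name for name, pct in volumes.items() if pct >= reclaim_pct)

# Assumed example volumes and a 60% threshold:
vols = {"A00001": 82, "A00002": 15, "A00003": 61}
cands = reclaim_candidates(vols, 60)  # these get re-copied and freed
```

With colocation on, the live data being re-copied from each candidate belongs to few users, so consolidating it takes fewer source and target mounts.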

The basic problem is that this data is getting big: how do you manage
it in a reasonable manner without propagating storage devices all over the
place?  Suggestions for alternative approaches to this problem are
actively sought.

Paul Bradshaw