ADSM-L

Subject: Problems with migration from disk to tape storage pool
From: John Schneider <jdschn AT IBM DOT NET>
Date: Mon, 23 Aug 1999 09:48:58 -0500
Greetings,
    We are running ADSM 3.1.2.20 on a Solaris 2.6 system, with a
BreeceHill Q47 DLT library.  Previously we had NT and Novell
clients backing up data directly to DLT tape drives, but that was
not very fast because of the network and because the clients
(particularly the Novell servers) cannot serve the data as fast
as the DLT 7000 can stream.  Each client would grab a tape
drive and hold it for a couple of hours at a time, but only back up
a couple hundred MB.
    So we set up a disk storage pool 3GB in size, and changed
the backup copygroups to point to it.  The disk storage pool is
set up to migrate data with a high and low point of 0%, and
to migrate to the tape storage pool.  My understanding was that
when the clients started their backups they would write their data
simultaneously to the disk storage pool, and immediately a
migration would start which would grab a tape drive and
copy data out of the disk storage pool onto the tape.  Effectively
a bunch of slow clients would now be writing their data through
a disk pool to a single tape drive.  The throughput of one client
would not go up, but only one tape drive would be in use, instead
of four or five being used for hours.
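    For what it's worth, the pool and copy group were set up with admin
commands along these lines (DISKPOOL, TAPEPOOL, and the policy names
below are placeholders, not our real names; the disk volume was
formatted with dsmfmt beforehand):

    define stgpool DISKPOOL disk highmig=0 lowmig=0 nextstgpool=TAPEPOOL
    define volume DISKPOOL /adsm/diskpool/vol01.dsm
    update copygroup STANDARD STANDARD STANDARD standard type=backup destination=DISKPOOL
    activate policyset STANDARD STANDARD
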
    But it is not working as expected.  The disk storage pool is
gradually filling up, and the data is not migrating out of it.  When
the backup schedules first kick off, a migration starts running, and
a small amount of data gets dumped to tape, then the migration
completes.  More and more data gets written into the disk storage
pool, but each migration that runs only dumps a few MB worth
of data, and then ends.
    At first we thought that the migration was being interrupted by
other processes demanding a tape drive, but that is not the case.
In a controlled experiment with no sessions or processes running,
we can force a migration to occur by issuing an "update stgpool"
command and changing any parameter.  The migration starts,
accesses a tape volume, dumps a small handful of files, and ends
with a successful completion code, even though the disk storage
pool is still 97% full.  And the upper and lower migration points
are both 0%!
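    (For the record, the forced test looks roughly like this, with
DISKPOOL standing in for our real pool name:

    update stgpool DISKPOOL highmig=0 lowmig=0
    query stgpool DISKPOOL format=detailed
    query process

Right after the migration process completes, query stgpool still shows
the pool around 97% utilized with both thresholds at 0.)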
    We have also tried raising the upper migration point to 5-20%
so it wouldn't start the migration so often, but it still refuses to
empty out the disk storage pool completely.
    Can anybody help me spot what is going wrong here?  We
are using disk storage pools elsewhere in the configuration and
migrating them to a tape storage pool via administrative schedule
a couple of times a day, and those seem to work.  I don't see what
the difference is.
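    For reference, those other pools are drained by an administrative
schedule that runs something like the following twice a day (the
schedule name, pool name, and times here are made up for illustration):

    define schedule MIGRATE_DISK type=administrative cmd="update stgpool OTHERDISKPOOL highmig=0 lowmig=0" active=yes starttime=06:00 period=12 perunits=hours
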
    Does the problem have anything to do with the fact that the
client schedules are writing data into the disk pool at the same time
we are trying to empty it out?  Should these happen at different times?
If so we will need much larger disk storage pools to handle a
complete backup schedule.

Thanks in advance,

John Schneider

***********************************************************************
* John D. Schneider       Email: jdschn AT ibm DOT net * Phone: 314-349-4556
* Lowery Systems, Inc.
* 1329 Horan                  Disclaimer: Opinions expressed here are
* Fenton, MO 63026                   mine and mine alone.
***********************************************************************