ADSM-L

Subject: No disk-to-tape migration taking place
From: Melinda Varian <[email protected]>
Date: Wed, 9 Aug 1995 20:18:21 EDT

I now understand (at one level) the problem I was having yesterday
with the server being unwilling to migrate files from our backup
disk pool to the cartridge pool.  QUERY STGPOOL for the disk pool
showed this:

q stgpool backuppool

Storage      Device       Estimated  %Util  %Migr  High   Low  Next
Pool Name    Class Name    Capacity                Mig%  Mig%  Storage
                               (MB)                            Pool
-----------  ----------  ----------  -----  -----  ----  ----  ----------
BACKUPPOOL   DISK           2,724.0  100.0   79.0    80    60  BACKTAPE

The documentation says that it is "%Migr" that must cross the high
migration threshold to cause migration to take place, not "%Util".
The difference between the two is described as being due to volumes
that are offline (we had none of those) and already-migrated files
that are cached on the disk.  We had earlier turned off caching to
try to get out of this situation, so there should have been no files
cached.  (And the server put out a message 90 zillion times saying
that it was unable to free enough cache space.)  At this point, I have
no idea why 21% of the space was considered unmigratable, but clearly
we were never going to get above the 80% threshold.
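
Just to put numbers on that 21%, working from the query output above (my
arithmetic, not anything the server reported):

     %Util - %Migr   =  100.0 - 79.0  =  21%
     21% of 2,724 MB =  about 572 MB

so roughly 572 MB of occupied space was apparently being treated as
non-migratable.  (The caching itself had been turned off earlier with
something along the lines of UPDATE STGPOOL BACKUPPOOL CACHE=NO.)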

So, we seem to have a Catch-22 here.  We got out of the problem (with
IBM's assistance) by adding a bit more storage pool space and lowering
the migration threshold and starting a MOVEDATA to force data to be
moved from disk to tape.  It's not really clear which of these
actions got the situation unwedged, but things did break free.  This
afternoon, the %Migr again rose above the high threshold, and a
migration process kicked in automatically.
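
For the record, the commands involved were along these lines (the volume
names and threshold values here are made up for illustration, and the
exact parameters may not be quite what we typed):

     define volume backuppool backup2.dsm
         (add another already-formatted disk volume to the pool)

     update stgpool backuppool highmig=50 lowmig=20
         (pull both thresholds well below the stuck 79% figure)

     move data backup1.dsm stgpool=backtape
         (force the contents of one disk volume over to the tape pool)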

Melinda Varian,
Princeton University