Subject: Controlling nodes - storage pool migrations
From: Roy P Costa <roycosta AT US.IBM DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 7 Oct 2003 10:10:04 -0400

We have storage pool hierarchies set up for the various parts of our
business: primary pools on DASD that migrate to tape in an automated
tape library (ATL).  Most of the nodes in the different storage pools
are file servers, some holding many gigabytes of data.  As I understand
it from the TSM/ADSM documentation and from experience, when a DASD
storage pool reaches its high migration threshold (HIGHMIG), TSM selects
the node with the MOST data in the pool and migrates ALL of that node's
file spaces from DASD to tape before it checks whether the low migration
threshold (LOWMIG) has been met.

We currently have a node with over 200 GB of data on DASD, which causes
us to run out of tapes in the ATL (and drives DASD usage well below the
low threshold).
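
For reference, per-node occupancy in a disk pool can be checked with a
SELECT against the server's OCCUPANCY table.  The sketch below is
written macro-style (the trailing hyphens are continuation characters),
and DISKPOOL is just a placeholder for one of our pool names:

    /* Placeholder pool name; total data per node, largest first */
    select node_name, sum(logical_mb) as total_mb -
        from occupancy -
        where stgpool_name='DISKPOOL' -
        group by node_name -
        order by 2 desc

Whichever node tops that list is the one TSM selects first once the
high threshold is crossed.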

I have tried setting Migration Delay (MIGDELAY) to 30 days, which I
believe makes TSM check whether ALL of a node's files have gone
untouched for 30 days before it migrates ALL of that node's files.  If
that is true, these file servers will never have all of their files 30
days old, since files are updated regularly, and I would need Migration
Continue (MIGCONTINUE) = yes to keep the DASD storage pools from
filling up completely.
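
For completeness, these are the parameters involved, again written
macro-style; the pool name and threshold values here are illustrative,
not our production settings:

    /* Illustrative values only; DISKPOOL is a placeholder */
    update stgpool DISKPOOL highmig=90 lowmig=70 -
        migdelay=30 migcontinue=yes
    /* Verify what is currently in effect */
    query stgpool DISKPOOL format=detailed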

If my assumptions are correct, throwing more DASD at the disk storage
pools will only make these migrations even larger.

Is there a way to tell TSM not to migrate the node with the MOST data,
or some workaround that gives us better control over these migrations?
I'm willing to explore and test any suggestions you may have.


Roy Costa
IBM International Technical Support Organization
