ADSM-L

Subject: Re: Controlling nodes - storage pool migrations
From: Zlatko Krastev <acit AT ATTGLOBAL DOT NET>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Sat, 8 Nov 2003 22:45:44 +0200

There is no method (AFAIK) to convince the TSM server to skip the biggest
node. But the workaround is quite straightforward: define a separate
stgpool for that node and use the existing stgpool for the rest. The node
with 200 GB is also a very good candidate for direct-to-tape
backups/restores. Putting it into a separate stgpool will ensure "manual"
collocation and good restore times.
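
As a minimal sketch, the policy wiring for this could look as follows; the
device class LTOCLASS, the node BIGNODE, and all pool/domain names here are
hypothetical:

    /* primary sequential pool on tape for the big node only */
    define stgpool bigpool ltoclass pooltype=primary maxscratch=20
    /* dedicated policy domain whose backup copy group targets it */
    define domain bigdomain
    define policyset bigdomain bigset
    define mgmtclass bigdomain bigset bigmc
    define copygroup bigdomain bigset bigmc type=backup destination=bigpool
    assign defmgmtclass bigdomain bigset bigmc
    activate policyset bigdomain bigset
    /* move the node into the new domain */
    update node bignode domain=bigdomain

Once the node backs up straight into its own pool, migration of the shared
dasd pool no longer has to drain that node's 200 GB.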

--> I believe it will check to see if ALL the files for a particular node
have not been touched in 30 days before migrating ALL of the node's files.

Incorrect. The files that were backed up more than 30 days ago will be
migrated, while the recently backed-up files will stay in the stgpool. If
you look at the description of the "def stg" command, you will see that
the MIGContinue parameter is honoured when the storage pool is filled with
files newer than the MIGDelay period!
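
Both parameters sit on the same pool definition. A sketch, assuming a
random-access disk pool named DISKPOOL (the name is hypothetical):

    /* keep files on disk until they are 30 days old, but allow     */
    /* migration to override the delay rather than let the pool fill */
    update stgpool diskpool migdelay=30 migcontinue=yes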

As not all your assumptions are correct and there is more than one way to
skin this cat, you can avoid adding dasd endlessly.
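
For example, to see which node dominates a pool before deciding how to
split it, a select against the OCCUPANCY table will do (the pool name is
hypothetical):

    select node_name, sum(physical_mb) from occupancy where stgpool_name='DISKPOOL' group by node_name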

Zlatko Krastev
IT Consultant

Roy P Costa <roycosta AT US.IBM DOT COM>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
07.10.2003 17:10
Please respond to "ADSM: Dist Stor Manager"


        To:     ADSM-L AT VM.MARIST DOT EDU
        cc:
        Subject:        Controlling nodes - storage pool migrations


We have storage pools set up for various parts of our business, with
primary pools on dasd migrating to tape in an automated Tape Library
(ATL).  Most of the nodes in the different storage pools are file servers,
some holding many gigabytes of data.  As I understand from the TSM/ADSM
documentation and from experience, when a dasd storage pool meets the
migration criteria (High Mig), TSM looks for the node that has the MOST
data in the storage pool and then proceeds to migrate ALL of that node's
filespaces from dasd to tape before it checks whether the migration low
threshold has been met.  We currently have the situation that the node
with the most data has over 200 GB of data on dasd, causing us to run out
of tapes in the ATL (and bringing the dasd usage down to well below the
low threshold).
I've tried setting Migration Delay = 30 days, which I believe will check
to see if ALL the files for a particular node have not been touched in 30
days before migrating ALL of the node's files.  If this is true, then
these file servers will never have all of their files 30 days old, since
files are updated regularly, and I would need Migration Continue = yes to
keep the dasd storage pools from filling completely.
If my assumptions are correct, throwing more dasd at the dasd storage
pools will only make these migrations even larger.
Is there a way to tell TSM not to migrate the node with the MOST data, or
some workaround that gives us better control of these migrations?  I'm
willing to explore and test any suggestions that you may have.


Roy Costa
IBM International Technical Support Organization
