This is a SERIOUS shortcoming of TSM, and something I've brought up
several times over the past 6 years! But the TSM folks just don't seem to
get it. I'll try again. I believe the situation as presented in the
original post, which comes up more and more often, could be solved easily
if we could have the following:
1) Allow a nodename= parameter to be specified on the MOVE DATA command.
THIS IS NOT ANYWHERE CLOSE TO MOVE NODEDATA!
Justification: The design goal of TSM is to keep working. If you have a
stgpool with collocate=yes and, for whatever reason, you run out of
scratch tapes, TSM will effectively undermine collocation by writing data
to other tapes in the pool that are not full. OK, great. My backups ran
and data got migrated, but my stgpool is now partially non-collocated and
not laid out the way it was designed. Assuming more scratch tapes have
been added, the solution is: MOVE DATA tape-volx NODENAME=somenode
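A concrete sketch of how that would look (volume and node names are made
up, and NODENAME= is the proposed parameter, not existing syntax):

  q content TAPE001 count=20
  move data TAPE001 nodename=SOMENODE

The QUERY CONTENT shows whose files landed on the overflow tape; the
proposed MOVE DATA would then pull just SOMENODE's files off it, restoring
collocation without having to relocate every other node's data on that
volume, which is what plain MOVE DATA forces you to do today.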
2) Allow a copypool= parameter to be specified on the DELETE FILESPACE
command.
Justification: Due to business needs, data from some node(s) now goes to
a new storage pool hierarchy. You can do a MOVE NODEDATA on the primary
pool and then a BACKUP STGPOOL, but now we're left with two copies of the
data in copypools. Management easily views this as a waste of resources,
both in copypool media and in the extra disk needed for the TSM database.
Solution: DELETE FILESPACE somenode some.fs.spec COPYPOOL=copypool
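For contrast, the nearest thing you can do today (pool and volume names
illustrative) is rather blunt: either wait for expiration to drain the old
copies, or run

  delete volume OLDCOPY01 discarddata=yes
  backup stgpool OLDPRIMARY OLDCOPYPOOL

for each affected tape, which discards every node's copies on that volume
and then re-creates the ones that still belong there. A COPYPOOL=
parameter on DELETE FILESPACE would eliminate all of that churn.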
Thoughts?
Regards,
Al Barth
Henrik Wahlstedt <shwl AT STATOIL DOT COM>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
09/09/03 03:09 AM
Please respond to "ADSM: Dist Stor Manager"
To: ADSM-L AT VM.MARIST DOT EDU
cc:
Subject: Re: reorg offsite data - noncollocated to collocated
Right! But expiration will take care of both copypools only when the
primary data expires... until then you will have two copies...
Kind regards
Henrik Wahlstedt
Statoil, KTJ IT NED SE1
Torkel Knutssonsgatan 24
118 88 Stockholm, Sweden
Phone: +46 8 429 6325  Mobile: +46 70 429 6325
E-mail: Henrik.Wahlstedt AT statoil DOT com
"Kamp, Bruce"
<bkamp AT MHS DOT NE To: ADSM-L AT VM.MARIST DOT
EDU
T> cc: (bcc: Henrik Wahlstedt)
Sent by: Subject: Re: reorg offsite
data - noncollocated to collocated
"ADSM: Dist
Stor Manager"
<ADSM-L AT VM DOT MA
RIST.EDU>
2003-09-08
18:52
Please
respond to
"ADSM: Dist
Stor Manager"
I am going through this also. The only way I could figure out to handle
it was to let expiration take care of it...
--------------------------------------
Bruce Kamp
Midrange Systems Analyst II
Memorial Healthcare System
E: bkamp AT mhs DOT net
P: (954) 987-2020 x4597
F: (954) 985-1404
---------------------------------------
-----Original Message-----
From: i love tsm [mailto:ilovetsm AT HOTMAIL DOT COM]
Sent: Monday, September 08, 2003 10:58 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: reorg offsite data - noncollocated to collocated
Hi all
Hoping someone can help with this "opportunity" we have...
We have changed a number of our existing clients from non-collocated to
collocated storage. Obviously the next step is to use the MOVE NODEDATA
command to move the old primary data across into the new primary
collocated storage pool. This data will then get backed up to the new
collocated copy storage pool we've set up.
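For anyone following along, the sequence looks like this (node and pool
names here are placeholders, substitute your own):

  move nodedata NODE1 fromstgpool=OLDTAPEPOOL tostgpool=COLLOCPOOL
  backup stgpool COLLOCPOOL NEWCOPYPOOL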
The problem is: how do I delete the data for these nodes that still
resides in the old non-collocated copy storage pool? I don't want to use
DELETE VOLUME because the data is spread over lots of tapes (70+), and
those tapes also contain data for nodes we want to leave non-collocated.
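You can at least see the scope of the problem with a select against the
volumeusage table (pool and node names illustrative):

  select distinct volume_name from volumeusage where stgpool_name='OLDCOPYPOOL' and node_name='NODE1'

That lists the old copypool tapes still holding a moved node's data, but
listing them doesn't get the data off them.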
Wish there was a MOVE NODEDATA command for copy stgpools!
Any thoughts or ideas appreciated.
TIA