MOVE CLIENT DATA

TOYOTA

I WANT TO SCHEDULE THE FOLLOWING COMMAND TO MOVE CLIENT DATA BECAUSE TSM IS HOLDING THE DATA ACROSS 50+ TAPES.. WHEN YOU RUN THE COMMAND MANUALLY IT REQUESTS YOU TO ENTER "YES" AT THE COMMAND LINE.. I WANT TO SCHEDULE THIS AS A JOB TO RUN ON THE WEEKEND.. WHAT'S THE BEST WAY FORWARD?



COMMAND I USE IS "Move nodedata ####### fromstg=ltobackpool"



#### = NAME OF THE CLIENT NODE..



HELP PLEASE :-o
 
When you run this command in batch mode, it doesn't ask for confirmation that you want to run it. You can run it in batch mode by issuing it with dsmadmc from the operating system command line.



"dsmadmc -id={admin} -password={password} move nodedata xxxxxx wait=yes"



I think if you set up an admin schedule, it should act the same.
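
If you go the admin schedule route, a rough sketch would be something like the following (the schedule name, node name, day and start time are just placeholders to adjust):

define schedule move_node_data type=administrative active=yes dayofweek=saturday starttime=20:00 perunits=onetime cmd="move nodedata XXXXXX fromstg=ltobackpool"

PERUNITS=ONETIME makes it a one-shot; leave it off (or use PERIOD=1 PERUNITS=WEEKS) if you want it to repeat every weekend.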



-Aaron
 
WHEN IS THE BEST TIME TO SCHEDULE THIS TASK.. AFTER THE CLIENT BACKUPS AND RECLAMATION.. OR CAN IT RUN ALONGSIDE THESE SCHEDULES??
 
You would want to run the MOVE NODEDATA command after the node has finished its backup but before reclamation.



After the backup so that you don't have to turn around and do it all over again (you may want to use node-level co-location to keep the data together in the future). Before the reclamation because after the move you are going to have 50+ tapes with less data on them, and more than likely you would have to run reclamation on those tapes as well.
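
If you are on 5.3 or later, you can check how many volumes the node's data sits on before and after the move (node and pool names are just the ones from your example):

query nodedata XXXXXX stgpool=ltobackpool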



-Aaron
 
OK.. I WILL STOP THE WEEKEND RECLAMATION JOBS AND SCHEDULE THE MOVE DATA JOB.. AND LOCK THE CLIENT NODE..



HOW MANY MOVE DATA SCHEDULES COULD I RUN IF THE OTHER CLIENTS ARE USING THE SAME STORAGE POOLS? OR SHOULD I JUST RUN ONE AT A TIME?



WHAT IS NODE-LEVEL CO-LOCATION, AND WHAT ARE THE BENEFITS AND PITFALLS OF IT??







THANKS FOR YOUR HELP WITH THIS..
 
You can run multiple move nodedata commands at the same time (one per node), but if the data is on the same tape, the later processes will wait for the one before them to finish using that tape.
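
One way to kick several off together, assuming made-up node names and macro file name, is to put the commands in a macro and run it through dsmadmc; WAIT=NO (the default) lets each move run as its own background process:

dsmadmc -id={admin} -password={password} macro movenodes.mac

where movenodes.mac contains something like:

move nodedata NODE1 fromstg=ltobackpool wait=no
move nodedata NODE2 fromstg=ltobackpool wait=no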



Node-level co-location tells TSM to keep all the node's data on as few tapes as possible. This is a benefit for restores as it doesn't have to mount as many tapes, but it is not as good when performing the daily backup stgpool command as it will have to mount more tapes to perform the tape copy.



It all depends on where you would like the best speed. Do you need the speed for restores or for the daily processing?



-Aaron
 
THANKS.. I HAVE STOPPED OUR RECLAMATION JOBS ON THE WEEKEND AND HAVE SCHEDULED A MOVE DATA SCHEDULE FOR ONE NODE..



SEE HOW THIS GOES.. I ASSUME THE MOVE DATA JOB WILL END BY ITSELF ONCE IT HAS MOVED ALL REQUIRED FILES ONTO TAPE..



HOW WOULD I SET UP CO-LOCATION ON THIS NODE IN QUESTION, OR MUST CO-LOCATION BE SET UP ON THE STORAGE POOL ITSELF? BUT THAT WILL AFFECT THE REMAINING 50+ NODES, AS THEY ARE USING THE SAME POOL NAME, IE LTOPRIMARY..







:p
 
In TSM 5.2 and below, the collocate option is set on tape pools and can be set to yes/no/filespace, meaning that it's on for everything (all client data), nothing, or based on filespace. This is good, but it is set at the STGPOOL level, and sometimes you want to enable it for just a few clients and not the entire stgpool.



In TSM 5.3, you can enable collocation by group. You can define a collocation group and then add the nodes whose data you would like to have collocated (see the sketch after the command lists below). Here is the entry in the 5.3 update notes:



Collocation by Group



Collocation by group is now supported. Groups of nodes can be defined, and the server can then collocate data based on these groups. Collocation by group can yield the following benefits:



* Reduce unused tape capacity by allowing more collocated data on individual tapes

* Minimize mounts of target volumes

* Minimize database scanning and reduce tape passes for sequential-to-sequential transfer



For newly defined storage pools, the default storage pool collocation setting is now GROUP.

Note: During collocation processing the message ANR1142I will be replaced with ANR1176I.



See the Administrator's Guide for more information.



See the following new commands:



* DEFINE COLLOCGROUP

* DEFINE COLLOCMEMBER

* DELETE COLLOCGROUP

* DELETE COLLOCMEMBER

* QUERY COLLOCGROUP

* QUERY NODEDATA

* UPDATE COLLOCGROUP



See the following changed commands:



* DEFINE STGPOOL

* MOVE NODEDATA

* QUERY NODE

* QUERY STGPOOL

* REMOVE NODE

* UPDATE STGPOOL
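
To set that up for your print node and any others you want kept together, a rough example (the group name printgroup and the node names are made up; ltoprimary is the pool from your post) would be:

define collocgroup printgroup
define collocmember printgroup NODE1
define collocmember printgroup NODE2
update stgpool ltoprimary collocate=group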



-Aaron
 
OK - I HAVE THE SCHEDULES CREATED FOR THE WEEKEND TO MOVE THE DATA.. BUT HOW DO I CREATE A SCHEDULE TO STOP IT RUNNING AT A PARTICULAR TIME..



I WANT TO STOP IT SO I CAN PERFORM A BACKUP OF THE NODE..
 
Hi TOYOTA (No kidding?)



You don't even need another schedule to stop the 1st schedule. You could always run:



> Upd SCH......PER=Onetime



That makes your schedule run just once. Then re-confirm from the actlog that the schedule completed successfully, and then run your backup.
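
Spelled out with the schedule name from the earlier sketch (substitute your own), that would be something like:

update schedule move_node_data type=administrative perunits=onetime

You can confirm it ran with "query event move_node_data type=administrative". If the move is still going when the backup window arrives, "query process" will show it and "cancel process" with the process number will stop it.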



Good luck



HP
 
What is the best approach for running this task on the off-site copy pool tapes.. I don't want both sets of primary & copy tapes in the library on the weekend.. Any ideas..
 
You should be sending tapes offsite at least once a day. You make the copies via backup stgpool and then send the copies offsite with move drmedia. You will also need to have reclamation set up so that the offsite tapes are rotated and kept fairly full.
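
As a rough daily sequence, assuming DRM is licensed and using the pool names from your posts plus a made-up copy pool name, it looks something like:

backup stgpool ltoprimary ltocopypool
move drmedia * wherestate=mountable tostate=vault
update stgpool ltocopypool reclaim=60

Reclamation of offsite copy volumes rebuilds them from the onsite primary tapes, so the offsite volumes never have to come back just to be reclaimed; once a volume is empty it returns for reuse.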



If you need details on any of these steps, just ask.



-Aaron
 
Yes, we move copy tapes off site each day - but I have noticed that a large file print node has data spread across 50+ tapes.. I want to perform the move node data on these copy pool tapes to get the data onto fewer tapes..
 
Copy volumes will always have the data spread across multiple tapes, as backup stgpool only copies whatever data was backed up that night. Trying to keep copy volumes collocated might be possible, but I think the effort is not worth it.



-Aaron
 