Converting to Node Replication

Lars-Owe (Uppsala):
Hello all!

We're contemplating the move to node replication instead of using copy pools. Doing it in a green field environment is pretty straightforward, but is there a Best Practice on how to do the conversion? Unfortunately we don't have enough space to store a third instance of everything.
 
I'd recommend being at 7.1.1.100, or higher if a newer fix pack is out before you implement it.

As far as best practices go, there are some guidelines on the TSM wiki:
https://www.ibm.com/developerworks/... Manager/page/Guidelines for node replication

If you have not consulted it yet, the manual has a good chapter on it too:
http://www-01.ibm.com/support/knowledgecenter/SSGSG7_7.1.1/com.ibm.itsm.srv.doc/c_repl.html

Just to clarify, there is no conversion as such. You just configure and enable node replication; it replicates the data for the nodes to your second TSM server by reading the data from the primary copy on the source server and writing it to primary storage pool volumes on the target server. As for the copy pools, if you want to stop using them, stop doing storage pool backups, then delete all copy pool volumes with discarddata=yes to return those tapes to scratch.
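
Roughly, the setup looks like the sketch below. Server names, addresses, passwords and node names are just placeholders; check the manual for the exact parameters that fit your environment.

/* On the source server: define the target server and make it the replication target */
DEFINE SERVER TSM_TARGET SERVERPASSWORD=xxxxx HLADDRESS=target.example.com LLADDRESS=1500
SET REPLSERVER TSM_TARGET

/* Enable replication for a node, then run it (or schedule it) */
UPDATE NODE MYNODE REPLSTATE=ENABLED
REPLICATE NODE MYNODE

/* Once you are happy with replication, stop the stgpool backups and reclaim the copy pool tapes */
DELETE VOLUME COPYVOL1 DISCARDDATA=YES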
 
Thank you. Upgrading to 7.1.1 or later is a necessity before we can even consider dropping our copy pools.

We have two sites. Site A has a 3584 library with some 800 LTO-5 tapes for libman_1 and some 700 LTO-4 tapes for libman_3. TSM4 and TSM8 both run on the same p730, with 150 and 300 TB of user data respectively, on LTO-5 as primary storage and 3592 as copy. There is also a small server running libman_1 and libman_3 for LTO-5 and LTO-4 respectively. The LTO library is almost full.

Site B has a 3584 library with almost 1000 3592 tapes for libman_2. TSM7 runs on a p730 with 450 TB of user data on 3592 as primary storage and LTO-4 as copy. We'd like a second TSM server on site B as well. There is also a small server running libman_2. We also have a master server, TSMC, for enterprise configuration.

My idea, broadly, is to create a few new domains and a new set of storage pools without copy pools to be used with replication, and then move our nodes one at a time, roughly as sketched below. This will take quite a while of course, but eventually we should get there without ever needing more spare storage than the largest single node. As a bonus we'd get rid of libman_3/LTO-4 at the same time. Working within a constrained amount of storage space is the challenge here. For historical reasons we have quite a few domains, so we'd like to consolidate those anyhow, using the blueprint as inspiration.
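
Per node, I'm thinking of something along these lines. The node, domain and pool names are just examples of what I have in mind, not tested syntax for our setup.

/* Point the node at the new domain (and thereby the new copy group destinations without copy pools) */
UPDATE NODE MYNODE DOMAIN=NEWDOMAIN

/* Move the node's existing data from the old primary pool into the new one */
MOVE NODEDATA MYNODE FROMSTGPOOL=OLD_TAPEPOOL TOSTGPOOL=NEW_TAPEPOOL

/* Enable and run replication for the node once its data is in the new pool */
UPDATE NODE MYNODE REPLSTATE=ENABLED
REPLICATE NODE MYNODE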
 