Adding a tape library to existing TSM for VE environment

Newcomer

Hi!

We are currently planning to add a tape library, possibly a TS3500, to our existing TSM environment, which currently consists of two TSM 7.1.1 servers running on physical RHEL hosts. The current environment is, in my opinion, somewhat of a mess. We are planning to replicate all the data across both TSM servers so we have a first level of failover if one datacenter goes down.
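For reference, the node-replication part can be sketched with TSM administrative commands roughly like this; the server and node names (TSMSRV2, MYNODE) are placeholders, not from this environment, and this assumes server-to-server communication is already defined:

```
/* On the source server: point replication at the partner server */
SET REPLSERVER TSMSRV2

/* Enable replication for a node, then run it */
UPDATE NODE MYNODE REPLSTATE=ENABLED
REPLICATE NODE MYNODE
```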

The problem, and the reason the tape library "has" to come into play, is that while the current servers are located in different datacenters, the two datacenters reside in the same physical building. So if the entire building collapses, we are in a rough spot.

We currently have one FILE storage pool with around 205 TB capacity, deduplication at around 60%, and about 50% of the pool in use. The other server is about the same, with 210 TB capacity and one FILE stgpool.

* We are using TSM for VE (7.1.1 at the moment), and from what I've read you can't migrate the control files of the VMs from the disk pool out to tape? So I guess we need to look into the VMCTLMC option to keep them in the FILE pool.

* The current retention time for the VM backups is 30 days. The business wants to increase this to 365 days (incremental forever), but they have also said that it's really only the "local filesystems" that need the 365-day retention, and that the "system state" part of the VM backup can keep the 30-day retention.

This made me quite confused about how I would solve this request. I'm not sure it's even possible to bind one management class to the "system state" part of the snapshot and another class to the "local filesystems" of the servers?

The easiest solution for me would be to keep the entire backup for 365 days and just add a tape pool once the library is in place, then migrate from the FILE pool to the tape pool after 30 days, while keeping the control files in the FILE pool with the VMCTLMC option. Am I missing something, or would it somehow be possible to split the retention times in a way I haven't thought of?
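To make that plan concrete, it could be sketched roughly like this; the pool, device-class, and management-class names (TAPEPOOL, LTODEV, FILEPOOL, VMCTL) are placeholders, and the policy domain/set names assume the defaults:

```
/* Define the tape pool on the library's device class and chain it   */
/* behind the existing FILE pool; MIGDELAY=30 keeps data on disk for */
/* 30 days before it becomes eligible for migration to tape          */
DEFINE STGPOOL TAPEPOOL LTODEV MAXSCRATCH=200
UPDATE STGPOOL FILEPOOL NEXTSTGPOOL=TAPEPOOL MIGDELAY=30

/* Management class whose backup copy group keeps its data in the    */
/* FILE pool, intended for binding the VM control files              */
DEFINE MGMTCLASS STANDARD STANDARD VMCTL
DEFINE COPYGROUP STANDARD STANDARD VMCTL TYPE=BACKUP DESTINATION=FILEPOOL VEREXISTS=NOLIMIT RETEXTRA=365 RETONLY=365
ACTIVATE POLICYSET STANDARD STANDARD
```

The data mover's options file would then bind the control files with `VMCTLMC VMCTL` so only the VM data blocks, not the control files, ever migrate to tape.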

We also aren't using a copy pool or backing up the current FILE stgpool, which seems a bit strange to me, but I guess that decision was made when they started with node replication.

Any insight or tips/tricks for adding a tape library to the current environment is very much appreciated! :)

My other idea was to upgrade the servers to 7.1.3 and make use of the new cloud container pool, and try to bypass the tape solution entirely. I'm not sure what the best practices are with the new pool type, or whether you could also send a database copy to the cloud as an extra disaster-recovery measure, instead of sending it to tape and bringing it offsite following the full DRM setup.
 
This made me quite confused about how I would solve this request. I'm not sure it's even possible to bind one management class to the "system state" part of the snapshot and another class to the "local filesystems" of the servers?
No, you can't separate the two.
The easiest solution for me would be to keep the entire backup for 365 days and just add a tape pool once the library is in place, then migrate from the FILE pool to the tape pool after 30 days, while keeping the control files in the FILE pool with the VMCTLMC option. Am I missing something, or would it somehow be possible to split the retention times in a way I haven't thought of?
Sounds good, except I'd consider using a random-access disk pool instead of tape for the long-term tier. Random access is better than sequential in this scenario: if you ever need to restore a large number of VMs at once, like in a disaster, sequential tape volumes could cause mount contention and slow down the restores.
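A random-access pool for that purpose could look roughly like this; the pool name, path, and size are made up for illustration:

```
/* DISK-type pools are random access; their volumes are      */
/* preallocated files on the server's filesystem             */
DEFINE STGPOOL LONGTERMDISK DISK

/* Create and format a 50 GB volume (FORMATSIZE is in MB)    */
DEFINE VOLUME LONGTERMDISK /tsmdisk/vol01.dsm FORMATSIZE=51200
```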
My other idea was to upgrade the servers to 7.1.3 and make use of the new cloud container pool, and try to bypass the tape solution entirely. I'm not sure what the best practices are with the new pool type, or whether you could also send a database copy to the cloud as an extra disaster-recovery measure, instead of sending it to tape and bringing it offsite following the full DRM setup.
A few things to consider with the new container pool:
- it's entirely tapeless
- only new backup data can go in the new container pool
- you cannot migrate data in or out of the container pool
- there is no backup stgpool, so you need to replicate the nodes to another server

You can still use that approach, and it would be the preferred method. As data expires out of the FILE pool, you can reallocate disk from the FILE pool to the container pool until the FILE pool is eventually empty.
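A minimal sketch of a 7.1.3 directory-container pool, with node replication as its only protection (the directory path and server name are placeholders):

```
/* Directory-container pool with inline deduplication; no tape  */
/* behind it, no migration, and no backup stgpool               */
DEFINE STGPOOL CONTPOOL STGTYPE=DIRECTORY
DEFINE STGPOOLDIRECTORY CONTPOOL /tsmcont/dir01

/* Protection comes from replicating the nodes to the partner   */
SET REPLSERVER TSMSRV2
REPLICATE NODE *
```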
 