TSM for VE - Scheduling, Collocation, Policy Questions

Mikey D

We're getting ready to implement the agent at one of our facilities, and I'm curious how everyone else is accomplishing some things we did relatively easily the "old" way.

How are you scheduling certain VMs at different time frames? Wondering if folks are using the TSM scheduler to do it and how that looks, or if the vCenter plugin has been an easier option for you.

Policy- and retention-wise, how are you handling VMs that may require a longer retention than others? Are folks just using separate management classes within the same policy domain, or multiple data mover nodes? And if you're using multiple management classes, are you selecting them with include statements in the dsm.opt or specifying them in the TSM schedule somehow?

Collocation-wise, we're using physical tape to store these VM backups; control data will be on disk. Are you collocating by filespace for these VM backups given the way they're stored, or are you using multiple data mover nodes to separate what needs collocation from what doesn't?

I'm struggling to come up with best practices for these items, which we handled so simply before with the in-guest backup method. I know it's a lot of questions, but recommendations and real-world experience would be invaluable.
 
How are you scheduling certain VMs at different time frames? Wondering if folks are using the TSM scheduler to do it and how that looks, or if the vCenter plugin has been an easier option for you.

We use an external scheduler that runs at the ESX host level at a particular day/time.

Policy- and retention-wise, how are you handling VMs that may require a longer retention than others? Are folks just using separate management classes within the same policy domain, or multiple data mover nodes? And if you're using multiple management classes, are you selecting them with include statements in the dsm.opt or specifying them in the TSM schedule somehow?

Our setup involves 4 virtual data movers, all configured identically. We run a batch backup script for each VM. When the script is created, it determines which TSM node name to use to perform the backup.

If non-production - NODE_NAME1
If production - NODE_NAME2
If production archive - NODE_NAME3

This allows us to set a specific retention time for each node via the policy domain/management class/copygroup settings.
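A minimal sketch of how such a batch script might pick the node name. Only the NODE_NAME1/2/3 mapping comes from the post; the helper names and the class argument are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical sketch of the per-VM batch script described above.
# Only the NODE_NAME1/2/3 mapping is from the post; function names
# and the class argument are illustrative.

pick_node() {
    case "$1" in
        nonprod)      echo "NODE_NAME1" ;;   # non-production retention
        prod)         echo "NODE_NAME2" ;;   # production retention
        prod-archive) echo "NODE_NAME3" ;;   # long-term "archive" retention
        *)            echo "unknown class: $1" >&2; return 1 ;;
    esac
}

# Build the dsmc invocation for one VM; the real script would execute it.
backup_cmd() {
    vm="$1"; class="$2"
    node=$(pick_node "$class") || return 1
    echo "dsmc backup vm \"$vm\" -asnodename=$node"
}

# Example: backup_cmd web01 prod  emits
#   dsmc backup vm "web01" -asnodename=NODE_NAME2
```

Because the retention lives entirely in the policy domain each node belongs to, the script only has to get the node name right.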


Collocation-wise, we're using physical tape to store these VM backups; control data will be on disk. Are you collocating by filespace for these VM backups given the way they're stored, or are you using multiple data mover nodes to separate what needs collocation from what doesn't?

We are using VTL to store our data and then copy off to tape for offsite backup.
We store the image data and control data in the same storage pool that is collocated by filespace.
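For anyone setting this up, collocation by filespace is a storage pool attribute on the TSM server. A hedged sketch (the pool name VM_VTLPOOL is an assumption, not from the post):

```
/* Hypothetical pool name; collocate=filespace keeps each VM's   */
/* filespace (i.e. each VM) together on a volume, as described.  */
update stgpool VM_VTLPOOL collocate=filespace
query stgpool VM_VTLPOOL f=d
```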


I'm struggling to come up with best practices for these items, which we handled so simply before with the in-guest backup method. I know it's a lot of questions, but recommendations and real-world experience would be invaluable.

We have only been live with our production environment for about 1 year now.
Hope this helps....
 
Hi nstanley


I am struggling to set different retention times for the data I back up with TSM for VE, as you suggested.

How do you do that?

The DM presents itself as the VMware datacenter node, and each VM is a filespace within it.

TSM for VE is registered to work with this datacenter node (in the vmcliprofile).

We are backing up IFULL (incremental-forever) daily with a 45-day retention, but we also need to retain backups (they could be full backups) monthly for long-term retention.

Could you shed some light on this?

I'll take the chance to thank Mike De Gasperis, Wanda Prather, and everyone who contributed to the thread at http://adsm.org/lists/html/ADSM-L/2013-01/msg00108.html

That is a great post and helped me understand how all this works.

tks, Nicolas.
 
I am struggling to set different retention times for the data I back up with TSM for VE, as you suggested.

How do you do that?

We control the retention at the TSM server level with our domain/policy/management class/copygroup setup.

The DM presents itself as the VMware datacenter node, and each VM is a filespace within it.

TSM for VE is registered to work with this datacenter node (in the vmcliprofile).

We are backing up IFULL (incremental-forever) daily with a 45-day retention, but we also need to retain backups (they could be full backups) monthly for long-term retention.

Could you shed some light on this?


By using a different "-asnodename=short_term_retention_node" or "-asnodename=longer_term_retention_node", we are able to keep different backups for different periods of time.

We perform a monthly backup of production vm's using a specific node name that keeps the images for 12 months.
Our normal daily/weekly backups use a different node name that keeps the images for 45 days.

Hope this makes better sense.
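Server-side, the two retentions described here would come from backup copy groups in two different policy domains. A hedged sketch (domain and policy set names are assumptions; the 45-day and 365-day values are from the post):

```
/* 45-day domain for the daily/weekly node */
define copygroup VE_STD_DOM STDSET STANDARD type=backup verexists=nolimit verdeleted=nolimit retextra=45 retonly=45
/* 365-day domain for the monthly node */
define copygroup VE_LONG_DOM LONGSET STANDARD type=backup verexists=nolimit verdeleted=nolimit retextra=365 retonly=365
```

Each "-asnodename" target node is then registered in the domain whose copy group carries the retention you want.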
 
nstanley,

thank you for your prompt reply. I am currently working on this, and appreciate your feedback.

I figured out what you said, but I still wonder about the following:
a) Should I specify the different asnodename only on the command I launch from the data mover (given all the grant proxy settings, of course)?
b) Does the fact that the vmcliprofile specifies a different datacenter node have no effect at all?

No need to answer; just letting you know where my mind is on this config.


If it is that easy (or simple), please confirm the following:

Steps to config TSM for VE with short and long term retention to protect VMs.

a) Set up TSM for VE following the install & user guide, and deploy incremental-forever backups (e.g. data retention for 30 days). Execute IFULL backups daily.
q proxy is as follows after this initial deployment:

Target Node        Agent Node
---------------    ---------------------------------------------
VE_DC              VE_DC_DM  VE_VMCLI
VE_VCENTER         VE_DC  VE_VMCLI

b) Register another node to be used as the datacenter node for long-term retention (e.g. name it VE_DC_LONGTERM) in a different policy domain that has the same management class names (not mandatory, but using the same names simplifies the dsm.opt configuration on the data mover), with different retentions and pointing to different storage pools (this is the important part).
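Step b) might look like the following on the TSM server (the password is a placeholder, and VE_LONG_DOM is a hypothetical domain carrying the long-term copy group retention):

```
register node VE_DC_LONGTERM xxxxxxxx domain=VE_LONG_DOM
```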

c) Grant proxynode authority to enable this new node to work:

q proxy will then be as follows:

Target Node        Agent Node
---------------    ---------------------------------------------
VE_DC              VE_DC_DM  VE_VMCLI
VE_DC_LONGTERM     VE_DC_DM  VE_VMCLI
VE_VCENTER         VE_DC  VE_DC_LONGTERM  VE_VMCLI
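The grants behind step c) would be along these lines (node names taken from the example above):

```
grant proxynode target=VE_DC_LONGTERM agent=VE_DC_DM
grant proxynode target=VE_DC_LONGTERM agent=VE_VMCLI
query proxynode
```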

d) as you said:
Using a different "-asnodename=short_term_retention_node" or "-asnodename=longer_term_retention_node" we are able to keep different backups for different periods of time.

So I use the backup/archive client data mover (in my case VE_DC_DM), specify as node the one I just created for long-term retention of the datacenter (in my case VE_DC_LONGTERM, which belongs to a different policy domain with different retention and a different storage pool destination), and perform a full VM backup once a month.


Am I missing or misunderstanding something?


I figure it would also be appropriate to use a different data mover (for performance, scheduling flexibility, and so on), but that has no real effect on the short/long retention differentiation I am after.


Thanks again.

Nicolás.
 
I think you got it.

So, for clarification: we have 4 physical data mover servers (DM01, DM02, DM03, DM04).

We have 3 "VC" node names configured in TSM.

VC1_DC1 production standard weekly full/incr backups with 45 day retention according to management class and copygroup.
VC1_DC2 production monthly full backups with 365 day retention according to management class and copygroup.
VC1_DC3 non-production standard weekly full/incr backups with 45 day retention according to management class and copygroup.

Each of the physical DMs has grant proxy to all VC* nodes.

normal backups use "-asnodename=VC1_DC1" on the dsmc command.
monthly backups use the "-asnodename=VC1_DC2" on the dsmc command.
non-production backups use "-asnodename=VC1_DC3" on the dsmc command.

Since the DMs are configured identically (down to the dsm.opt files and backup scripts), our external scheduler can run the backup script anywhere and the proper backup is completed.

********************
One thing to note when implementing this scenario is that the "monthly archive" is not really an "archive" command, right?
The only thing that makes it an "archive" is the different node name and associated policy that keeps it longer than normal backups, i.e.:

Normal backups:
dsmc backup vm "vm_name" -asnodename=VC1_DC1 -mode=(full or incr)

Archive backups:
dsmc backup vm "vm_name" -asnodename=VC1_DC2 -mode=full

Non-Prod:
dsmc backup vm "vm_name" -asnodename=VC1_DC3 -mode=(full or incr)

If your ESX host supports CBT (Changed Block Tracking), then you have to be careful with the scheduling of your backups.
A "-mode=full" backup resets the CBT tracking information on the VM itself.

We implemented this scheduling to accommodate this issue.

Day1 - archive (VC1_DC2) (cbt reset)
Day2 - weekly full (VC1_DC1) (cbt reset)
Day3-... - incrementals (VC1_DC1) (cbt NOT reset)

This was so that whenever we did an archive under a different node name, we were not expecting an incremental to run under the original node name.
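The rotation above can be sketched as a small helper that picks node name and mode by day of the rotation (the helper and day numbering are illustrative; node names and modes come from the post):

```shell
#!/bin/sh
# Hedged sketch of the CBT-aware rotation described above.
# Day 1: monthly archive (CBT reset), Day 2: weekly full (CBT reset),
# Days 3+: incrementals (CBT left alone).
plan_for_day() {
    case "$1" in
        1) echo "VC1_DC2 full" ;;
        2) echo "VC1_DC1 full" ;;
        *) echo "VC1_DC1 incr" ;;
    esac
}

# The scheduler would then run, with node/mode taken from the plan:
#   dsmc backup vm "$vm" -asnodename=$node -mode=$mode
```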


hope I am not rambling.....
 
Hi!
I'm using tsm4ve.
Is it possible to have...?

- Two different VM backup schedules defined in the GUI (for two arbitrary groups of VMs in the same datacenter)
- Different retention periods (in days) for each group/schedule
- A shared storage pool for those two backups
- The TSM scheduler running those backups (instead of launching the "dsmc backup vm" command from an external cron)

TIA
jorge infante
rosario - santa fe - argentina
 