ADSM-L

Re: [ADSM-L] TSM for VE 6.4 Questions/Recommendations

Subject: Re: [ADSM-L] TSM for VE 6.4 Questions/Recommendations
From: "Prather, Wanda" <Wanda.Prather AT ICFI DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 22 Jan 2013 15:05:14 +0000
Well, I'm sure the answer is "it depends", because it's all about the amount of 
data you have to move; but I think most people will want a minimum number of 
data mover instances per data mover machine (real or virtual).

The "data mover" is just an instance of a TSM scheduler service running dsmc. 
It's actually the regular baclient that is providing that multi-threading.
And with 6.4, you get (prepare to cheer!) incremental forever backups, at the 
BLOCK level - the TSM design we know and love, only better.
You do that first full backup of the VM, then ever after you are just doing the 
changed blocks.
So the amount of data you have to move per day drops drastically.
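As a sketch of what that looks like from the command line (the VM name here is 
made up; -mode values per the 6.4 client):

```
# one-time full backup of the VM ("appserver01" is a hypothetical name)
dsmc backup vm "appserver01" -mode=IFFull

# every run after that sends only the changed blocks
dsmc backup vm "appserver01" -mode=IFIncremental
```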

I think most people will find it's better to have a minimum number of data 
movers with 6.4.  First there is the benefit of eliminating the general mess 
and difficulty and urge to scream involved in having a large number of 
schedulers and CADs active on one host.  But now you also have performance 
controls you can specify with parms on the client/data mover to manage the 
impact of the load on the ESX host and the VMware datastore.  

So for example, you create a schedule that specifies a backup of 12 VM's on 3 
ESX hosts, all to be done by 1 data mover/scheduler instance.
You can put parms in the dsm.opt that say, for example:
    Back up 5 at a time
    But only back up at most 2 concurrently from any one ESX host
    Only back up at most 3 concurrently from a given datastore 
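To my understanding those three knobs map to client options along these lines 
(a sketch; check the 6.4 client manual for exact spellings and defaults):

```
* dsm.opt fragment for the data mover node (values from the example above)
VMMAXPARALLEL        5    * back up 5 VMs at a time
VMLIMITPERHOST       2    * at most 2 concurrent backups per ESX host
VMLIMITPERDATASTORE  3    * at most 3 concurrent backups per datastore
```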

With multiple data movers going against those 3 ESX hosts, you don't have that 
control.
W


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Ehresman,David E.
Sent: Tuesday, January 22, 2013 8:52 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] TSM for VE 6.4 Questions/Recommendations

Does the data mover's ability to run multiple threads in VE 6.4 mitigate the 
need for multiple data movers per ESX host?  Is it now reasonable to have a 
single data mover per ESX host?

David

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Prather, Wanda
Sent: Monday, January 21, 2013 8:36 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] TSM for VE 6.4 Questions/Recommendations

Hi Ken,

Yes, TSM/VE 6.4 lets you do backups of many VM's in parallel by making the data 
mover multi-threaded.
But each thread is still just doing one VMWare snapshot of each VM - snap, 
backup, remove the snap.

I have a customer with some non-TSM software which also makes VMware snapshots.
Having  multiple VMWare snapshots of the same VM in progress should be avoided.
Should it work OK? yes
Does it work OK?  not always
VMware is famous for getting tangled in its own underwear removing snapshots.  
V5 does it better than V4, but there's no reason to tempt fate.

Removing a snapshot requires merging all the changed blocks from the snapshot 
back into the VM image, and the more snapshots you have outstanding, the worse 
the performance problem you create when you try to remove them.  We've seen 
snapshot removal processes render VM's completely non-responsive before.

This really has nothing in particular to do with TSM/VE, except the fact that 
TSM/VE is using VMware snapshots to do what it does.  
The same caveat applies to any two things, including human beings, that are 
trying to use VMware snapshots of the same VM at the same time.  I was merely 
pointing out that this would be a reason one might need to be careful when 
scheduling TSM/VE backups.  

W
   


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
Kenneth Bury
Sent: Monday, January 21, 2013 7:48 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] TSM for VE 6.4 Questions/Recommendations

Wanda,

Please expand on your statement where you suggest "making sure we don't run 
backups at the same time that other software is running that also uses 
snapshots". One of the new features in TSM for VE v6.4 is the ability to run VM 
backups in parallel, many of them.

Ken


On Mon, Jan 21, 2013 at 6:43 PM, Prather, Wanda <Wanda.Prather AT icfi DOT 
com>wrote:

> I agree that using the "in guest" client in a VM is easier for backup; 
> the big deal is when it comes time to do a DR.
> Then having the VM image from TSM/VE is easier to do your full machine 
> restores with.  (I'm willing to do most anything to avoid having to 
> deal with a MS System State restore!)
>
> As far as scheduling, my customer uses the plugin to create the 
> schedule for the VM's.
>
> But, once you do that, go over to the TSM server/TIP and look at the 
> schedule it created; it's just a normal TSM client schedule, but with 
> a bunch of options that are specific to VE.  So you can doctor it on 
> the server side.
> That seems to be the easiest route - create with the plug-in, tweak 
> with the TIP.
>
> Creating separate schedules for specific VM's is tricky because of VMotion.
> The beauty of being able to wild-card the schedule with the ESX 
> hostname, is that if/when VMotion moves a VM to another ESX host, the 
> backup for that ESX host will detect the "new" VM and it will still get 
> backed up.
>
> To set up schedules that are specific to a single VM, you are going to 
> have to be a little sneaky about using multiple schedules and 
> wildcards so that you still can back up that VM when it moves, and not 
> back it up multiple times.  Although that shouldn't be a big deal in 
> terms of data with these VE block-level backups (which are so 
> delightfully small!), I can foresee unpleasantness if you have 
> multiple backup runs creating overlapping VMWare snapshots.
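> As a sketch of that sneakiness (hostnames and the VM name are made up; 
> syntax per the DOMAIN.VMFULL client option):
>
> ```
> * Schedule A's opt: everything on the cluster EXCEPT the special VM
> DOMAIN.VMFULL "VMHOST=esx01.example.com,esx02.example.com;-VM=specialvm"
>
> * Schedule B's opt: only the special VM, wherever VMotion has put it
> DOMAIN.VMFULL "VM=specialvm"
> ```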
>
> The only thing we've really had to work around, though, is making sure 
> we don't run backups at the same time that other software is running 
> that also uses snapshots.  We haven't tried to isolate a schedule for a 
> single VM.
>
> YMMV
>
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf 
> Of Mike De Gasperis
> Sent: Friday, January 18, 2013 2:16 PM
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: [ADSM-L] TSM for VE 6.4 Questions/Recommendations
>
> I posted this up on the adsm.org forum but I'm hoping I get more hits 
> here.
>
> We're getting ready to implement the agent at one of our facilities 
> and I'm curious to see how everyone else is accomplishing some things 
> we did relatively easily the "old" way.
>
> How are you scheduling certain VM's at different time frames? 
> Wondering if folks are using the TSM scheduler to do it and how that 
> looks or if the vCenter plugin has been an easier option for you.
>
> Policy and retention wise how are you handling VM's that may require a 
> longer retention than others? Wondering if folks are just using 
> separate management classes within the same policy or using multiple 
> data move nodes. I'm also wondering if you're using multiple 
> management classes are you just using includes within the dsm.opt or 
> specifying in the TSM schedule somehow?
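> (For what it's worth, the include route would be INCLUDE.VM statements 
> in the data mover's dsm.opt, along these lines - VM name patterns and 
> class names here are made up, and TSM reads the list bottom-up, so the 
> more specific match goes last:)
>
> ```
> * bind most VMs to a default class, long-retention ones to another
> INCLUDE.VM *         STD_VM_MC
> INCLUDE.VM sqlprod*  LONGTERM_VM_MC
> ```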
>
> Collocation wise we're using physical tape to store these VM backups; 
> control data will be in a disk area. Are you collocating by file space 
> for these VM backups given the way they're stored, or are you using 
> multiple data mover nodes to separate what needs collocation from what 
> doesn't?
> My main concern is the file level recovery is painfully slow on 
> physical tape, going and buying a bunch of disk or VTL isn't a very 
> cost effective option for us unfortunately.
>
> I'm struggling to come up with the best practices on how to accomplish 
> these items which to me we did so simply before with the in guest 
> backup method. I know that's a lot of questions, but recommendations 
> and real-world experience on these items would be invaluable.
>
>


--
Ken Bury