Subject: Re: [ADSM-L] Two different retention policies for the same node
From: Steven Harris <sjharris AT AU1.IBM DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 25 Mar 2009 09:25:07 +1100
Michael

MIGDELAY of 7 will keep newly changed files for 7 days.  Older ones will
migrate off, so your basic OS files and so on won't be there.  The only way
around that is to run a selective once a week, or to change MODE to ABSOLUTE
in the backup copygroup on Sunday morning and back to MODIFIED on Monday
morning.  But if you are going to do that, you might as well buy NetBackup.
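
If you did go the MODE-flip route, it is only a couple of commands, which
could be wrapped in a pair of administrative schedules (DEFINE SCHEDULE ...
TYPE=ADMINISTRATIVE CMD="...").  A sketch only - the domain, policy set and
class names are placeholders:

   /* Sunday a.m.: force full-file backups */
   update copygroup HIPRI_DOM STANDARD STANDARD standard type=backup mode=absolute
   activate policyset HIPRI_DOM STANDARD

   /* Monday a.m.: back to normal incremental behaviour */
   update copygroup HIPRI_DOM STANDARD STANDARD standard type=backup mode=modified
   activate policyset HIPRI_DOM STANDARD

The ACTIVATE is what makes each change take effect.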

Regards

Steve



                                                                       
From: Michael Green <mishagreen@GMAIL.COM>
Sent by: "ADSM: Dist Stor Manager" <[email protected]>
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] Two different retention policies for the same node
Date: 25/03/2009 08:55 AM

Thanks Steven and Wanda!

Your ideas are very valuable!

I've been thinking about Steven's recipe number 2. It seems OK, but in a
DR situation I wouldn't really want to mess with DB restores.

But what if...


1. Create a new domain for high priority servers (there are just a few
of them) with the same retention as before, but..
2. ...point the domain's copygroup destination to a separate DISK stgpool.
3. During the incremental, have simultaneous write put the active data into
an offsite active-data pool (of devtype FILE, of course).
4. Migrate the DISK stgpool down to an onsite FILE stgpool that has a
MIGDELAY of 7.
5. Replicate that FILE stgpool (by means of the underlying storage) to
offsite storage (with dedupe or without) - commands sketched below.
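
Something like this (untested; device class and pool names are placeholders):

   /* offsite active-data pool on devtype FILE */
   define stgpool HIPRI_ADP OFFSITE_FILEDEV pooltype=activedata maxscratch=100
   /* onsite FILE pool holding the 7-day window */
   define stgpool HIPRI_FILE ONSITE_FILEDEV maxscratch=200 migdelay=7
   /* DISK landing pool: migrates to FILE, simultaneous write to the ADP */
   define stgpool HIPRI_DISK disk nextstgpool=HIPRI_FILE activedatapools=HIPRI_ADP
   /* new domain pointing at the ADP, retention as before */
   define domain HIPRI_DOM activedestination=HIPRI_ADP
   define policyset HIPRI_DOM STANDARD
   define mgmtclass HIPRI_DOM STANDARD STANDARD
   define copygroup HIPRI_DOM STANDARD STANDARD standard type=backup destination=HIPRI_DISK
   assign defmgmtclass HIPRI_DOM STANDARD STANDARD
   activate policyset HIPRI_DOM STANDARD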

In a DR situation I'll have to:
1. Restore the DB.
2. The active data is in the active-data pool.
3. The last 7 days of data are in the FILE stgpool.

How is that? Any flaws you can spot in this approach?
--
Warm regards,
Michael Green



On Wed, Mar 18, 2009 at 11:52 PM, Steven Harris <sjharris AT au1.ibm DOT com>
wrote:
> Michael,
>
> I have two ideas about your problem.
>
> Idea 1.  Create another domain for your high priority servers, with the 7
> day retention period.  Move the nodes into this domain.  Create new nodes
> for these machines in the old domain with different names.  For machines
> performing a "normal" incremental, run two backups every day, one to each
> node name.  For machines with too many files for that, run a weekly
> incremental to the longer retention domain.  For database/TDP nodes run a
> weekly extra backup to the longer retention domain.  Explain carefully to
> management that the coverage of the longer retention period data now has
> holes in it.
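>
> The dual backups in idea 1 can be driven from the client with a second
> registered node and its own option file.  A sketch, names made up:
>
>    (server)  register node ERP01_LONG secretpw domain=LONG_DOM
>    (client)  dsmc incremental -optfile=/opt/tivoli/tsm/client/ba/bin/dsm_long.opt
>
> where dsm_long.opt contains "nodename ERP01_LONG" and points at the same
> server.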
>
> Idea 2.  The "7 day" retention for offsite sounds like a simple-minded
> notion of what might be nice to have.  What they really want is an
> active-data pool, but they aren't comfortable with it.  Create an
> active-data pool daily for the high priority data.  Send it offsite to
> disk.  Set the DB expiration to not less than 7 days.  Set the active-data
> pool's pending delay (REUSEDELAY) to 7 days.  Continue to send your normal
> copypool tapes offsite.  Have a small library offsite too.
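>
> In commands, roughly (pool, domain and devclass names invented):
>
>    define stgpool OFFSITE_ADP OFFSITE_FILEDEV pooltype=activedata reusedelay=7
>    update domain PROD_DOM activedestination=OFFSITE_ADP
>    /* refresh daily, e.g. from an administrative schedule */
>    copy activedata PROD_PRIMARY OFFSITE_ADP
>    /* if you use DRM, keep DB backup series at least 7 days */
>    set drmdbbackupexpiredays 7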
>
> If you have a disaster, all your active data is instantly available.  If
> you need day -n, restore the DB for that day and the active-data pool
> contents for that day will be available.  Alternatively, use your small
> library and restore day -n data from the usual copypool tapes.
>
> One loose end... if your ERP is SAP, the backups are actually TSM
> archives.  I'm not sure how active-data pools work with archives, and I
> don't have the time to look it up now.
>
> Regards
>
> Steve.
>
>
> Steven Harris
> TSM Admin, Sydney Australia
>
>
>
>
> From: Michael Green <mishagreen@GMAIL.COM>
> Sent by: "ADSM: Dist Stor Manager" <[email protected]>
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: Re: [ADSM-L] Two different retention policies for the same node
> Date: 19/03/2009 12:27 AM
>
> On Tue, Mar 17, 2009 at 11:18 PM, Conway, Timothy
> <Timothy.Conway AT jbssa DOT com> wrote:
>> Is this to avoid having two copypools?  That's a reasonable goal.   I
>> have only one copypool, which is my DR offsite pool.  Just make your
>> onsite copypool an offsite pool, and you can give them 25 times better
>> than they're asking for.
>
> No, the idea is to keep 7 days of history offsite, on disk, for a very few
> of the most important servers (ERP, HR).  I don't much care whether that
> is a primary pool or a copy pool; as long as I can get my data back off
> it, it's fine.
> Today I manage 3 servers here and am sitting on 0.5 PB of backup data.
> There is no point in having all that data (most of which is inactive) at
> the DR site (we do have an offsite vault, though).  At the DR site we want
> to keep preconfigured turn-key ERP and HR servers, a preconfigured TSM
> server with its database, and SAN- or NAS-attached disk that holds the
> 7-day history.  I have yet to work out how and by what means my 140GB
> database will get to the DR site on a daily basis.  Maybe we will use
> dedupe, or maybe we will open a separate TSM instance just for these few
> servers so that the DB we have to copy to the DR site is as small as
> possible.  Also, the smaller the DB, the better in a DR situation.
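>
> If the underlying-storage replication idea flies, one option would be to
> point the DB backup at a FILE device class directory that the array
> replicates along with everything else; a sketch (devclass name invented):
>
>    define devclass DRDB_FILE devtype=file directory=/tsm/dbbak mountlimit=2
>    backup db devclass=DRDB_FILE type=full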
>
>> Unless most of the data changes every day, the difference between 7 days
>> and 180 days worth of copypool is remarkably small.
>
> It can be big. The ERP server backs up over 100GB nightly. I guess it
> dedupes pretty well, though.
>
>
>> If you have no copypool at all, the whole thing is a farce.
>> If they're wanting fast portable full restores of a subset of the total
>> nodes, how about backupsets?  Make a nodegroup containing all the nodes
>
> Backup sets are fine as long as they are relatively small and you don't
> have to generate them on a daily basis.  Imagine your ERP is about 400GB
> worth of active data and you have to generate a backup set that big every
> day.  I don't even know yet what kind of bandwidth I'll have to our DR
> location.  Assuming I get the backupset generated in 4-5 hours, how many
> hours will be required to send it off?  Also, what happens if management
> then decides they want a few more machines to join the first one at the
> DR location?  This solution sounds like a nice idea TSM-wise, but IMHO
> it's not very scalable otherwise.  As it looks to me, the best approach
> is to back up locally, dedupe, and send it off deduped.
>
>
>> they want daily fulls of, and make a backupset of that nodegroup every
>> day.  Give the backupset a 7 day retention, and keep track of the
>> volumes that are in the list of backupset volumes from one day that
>> disappear the next (simple to script).  That same script can note tapes
>> that show up in the list of backupset volumes that weren't there the day
>> before, and check them out of the library and throw your operations team
>> an email listing every tape to be sent offsite and to be recalled.  I
>> find that I can generate file-class backupsets (with TOC) at about
>> 27MB/s - 8.5 hours to do an 814GB node, single-threaded.
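>>
>> Roughly, with illustrative node and prefix names:
>>
>>    define nodegroup DR_FULLS
>>    define nodegroupmember DR_FULLS ERP01
>>    define nodegroupmember DR_FULLS HR01
>>    generate backupset DR_FULLS DAILYDR devclass=BKSETFILE retention=7 toc=yes
>>    /* detailed output lists the volumes, for the tracking script */
>>    query backupset * DAILYDR* format=detailed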
>>
>