ADSM-L

Re: [ADSM-L] SV: Suggestion for Archive?

2008-01-03 16:16:27
Subject: Re: [ADSM-L] SV: Suggestion for Archive?
From: Steven Harris <sjharris AT AU1.IBM DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 4 Jan 2008 08:14:33 +1100
Neil, that is an interesting approach to the problem, and it set me thinking.

How about this -

Set up an active-data pool on tape for this data.
On the designated day, synchronize the active-data pool, take a database
backup, eject the active-data pool tapes and that DBB, and send them off to
the vault.
Delete the volumes in the active-data pool with discarddata and start the
sync again.
To restore, use a separate TSM instance: restore the DB and then restore
the data.
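In dsmadmc terms that monthly cycle might look roughly like this (the pool,
device class and volume names are invented for illustration, and the nodes'
domain is assumed to allow the active-data pool):

```
/* One-off setup: an active-data pool on tape (hypothetical names) */
define stgpool monthly_active lto_devc pooltype=activedata maxscratch=20

/* On the designated day: sync active data from the primary pool,
   then take a full database backup for the vault */
copy activedata tapepool monthly_active
backup db devclass=lto_devc type=full

/* After ejecting the tapes and the DBB, empty the pool so next
   month's sync starts clean (repeat for each volume) */
delete volume MA0001 discarddata=yes
```

The discarddata deletions are the DB-intensive part, which is why spreading
the copy work over the month matters.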

This takes the resource-intensive part of creating the copy out of the
critical path and allows it to be spread over the month.  The end of month
step only has to copy one day's data.  It will use some database space,
and even the deletion of the volumes is resource-intensive, but it might be
manageable.

Steven Harris
TSM Admin
Sydney Australia

"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU> wrote on 04/01/2008
03:44:37 AM:

> One issue with a monthly/yearly backup/archive is that the changes
> that occur between events will not be captured.  If a file is
> created on March 3rd and deleted March 25th, a monthly
> backup/archive that runs on the first of each month will not capture
> this file.
>
> One method of retaining all data that is backed up nightly would be to:
> - Create the node name of the client reflecting the time period that
> data is backed up - i.e. "docserver-march08".
> - At the end of each time period, change the node name to the new
> time period - i.e. "docserver-april08".
> - run a "export node docserver-march08...." to tape and then put
> that tape in a safe place with associated recovery documentation.
> After the data has been successfully exported, delete all files
> associated with the exported node "docserver-march08".
> - Repeat for each time period.
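A sketch of that rotation as dsmadmc commands (node names, password and
device class are placeholders; verify the export completed before deleting
anything):

```
/* End of March: register the new period's node; the client is then
   reconfigured to back up under the new name */
register node docserver-april08 secretpw domain=standard

/* Export the closed period to tape for the vault */
export node docserver-march08 filedata=all devclass=lto_devc scratch=yes

/* Only after the export is verified: reclaim the DB space */
delete filespace docserver-march08 *
remove node docserver-march08
```
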
>
> Your database will remain manageable.
> You will maintain daily recovery granularity.
> You MUST keep the recovery sequence documentation for exports which
> span multiple media.
> If you change media or backup platforms, you will have a bit of work
> importing and exporting to new media, but so it goes...
>
> Have a nice day,
> Neil Strand
> Storage Engineer - Legg Mason
> Baltimore, MD.
> (410) 580-7491
> Whatever you can do or believe you can, begin it.
> Boldness has genius, power and magic.
>
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On
> Behalf Of Henrik Wahlstedt
> Sent: Thursday, January 03, 2008 10:23 AM
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: Re: [ADSM-L] SV: Suggestion for Archive?
>
> Hi,
>
> Everybody seems to fancy backupsets, exports and archives. I think
> Lloyd is right here suggesting a normal backup under a different nodename.
> Normal monthly/yearly backups won't punish your DB as much as
> archives, and you should be able to live with one TSM server instance.
> It is your Filers that will kill TSM if you try to archive them
> monthly with 10 years' retention... Just test and let me know if I am
> incorrect. :-)
>
>
> //Henrik
>
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On
> Behalf Of Lloyd Dieter
> Sent: 3 January 2008 16:01
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: Re: [ADSM-L] SV: Suggestion for Archive?
>
> Christian,
>
> Is the primary tape pool collocated?  If not, I'd strongly recommend
> it (although it's probably too late to help you with this issue).  I
> collocate all of my primary sequential access pools, and control
> usage with maxscratch and collocation groups.
>
> Do you have enough space in your disk pools to do a "move nodedata"
> from tape to disk pools in preparation for the generate backupset or
> export operation?
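For reference, the staging step Lloyd mentions is a single command per node
(the pool names here are placeholders):

```
/* Stage one node's data from tape to disk ahead of the
   generate backupset / export, to cut down on tape mounts */
move nodedata node_a fromstgpool=tapepool tostgpool=diskpool
```
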
>
> The other option I've used is to define "monthly" or "yearly" nodes,
> with different node names, schedules and retention settings.  For
> example, "node_a" for the dailies, "node_a_monthly", "node_a_yearly".
> It's a nuisance to set up on the client side, but works well once
> it's in place and avoids database bloat.
>
> I think that you're going to have issues doing many archives like
> that, especially for your anticipated growth...even if it gets you
> past the immediate issue.
>
> Don't forget that you can set up a second TSM instance on the same
> physical box...that might help for what you're trying to accomplish.
>
> -Lloyd
>
>
> On Thu, 03 Jan 2008 13:58:20 +0100
> Christian Svensson <christian.svensson AT CRISTIE DOT SE> wrote thusly:
>
> > Hi Lloyd,
> > We have tried backupsets for a while now, but we see that it takes
> > approx 3 weeks to archive all 300 nodes. If a backupset fails, then we
> > need to restart the entire job, and we are getting behind schedule.
> > We are looking at creating smaller "node groups", but it still takes a
> > long time. :(
> >
> > I was thinking of maybe setting up a second TSM server and exporting
> > the data from one server to the other; that may reduce the time. I'm
> > guessing that the problem is probably all the tape mounts that are
> > required to collocate all the data, so maybe that is something to
> > look at?
> >
> > Or what do you think?
> > Good information to know: the end user is looking at growing to
> > 100 TB in the next 4-5 years.
> >
> > Thanks
> > Christian
> >
> > -----Original Message-----
> > From: Lloyd Dieter [mailto:ldieter AT ROCHESTER.RR DOT COM]
> > Sent: 3 January 2008 13:09
> > To: ADSM-L AT VM.MARIST DOT EDU
> > Subject: Re: Suggestion for Archive?
> >
> > Christian,
> >
> > This sounds like an excellent use for backupsets, or else possibly
> > periodic exports.
> >
> > Other sites have a second instance of TSM that is used for periodic
> > large backups and long-term retention requirements.
> >
> > I generally discourage archives for large amounts of data, due to the
> > DB entries that are created, as well as the amount of time required to
> > create those archives.  The only sites that I have with "out of
> > control" database growth are attempting to do what you describe.
> >
> > -Lloyd
> >
> >
> >
> > On Thu, 03 Jan 2008 11:09:13 +0100
> > Christian Svensson <christian.svensson AT CRISTIE DOT SE> wrote thusly:
> >
> > > Hi all,
> > >
> > > I hope you all had a great new year.
> > >
> > > Just a quick question.
> > >
> > > Has anyone tried to archive 20 TB of data every month for 10 years?
> > > If yes, how are you doing that, and what does your environment look
> > > like?
> > >
> > >
> > >
> > > Happy New Year
> > >
> > > Christian
>
>