Subject: Re: Question - Migration
From: Kelly Lipp <lipp AT STORSERVER DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 23 May 2006 10:01:39 -0600
It seems to me that you should be more worried about the backup stgpool
operations completing, since this surfaced as a disaster recovery
problem.  You are correct in your analysis of how migration works: it
takes the largest clients first and then the smaller ones.

You should have an admin schedule that issues a backup stgpool operation
for all of your primary pools to your copy storage pool.  This ensures
that a copy of all newly arrived data is in the copy pool and can be
taken offsite.
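Something like this would do it (DISKPOOL, COPYPOOL and the schedule
name are placeholders for your own pool names):

   define schedule daily_stgbackup type=administrative active=yes starttime=06:00 period=1 perunits=days cmd="backup stgpool diskpool copypool maxprocess=2"

MAXPROCESS=2 runs two backup processes so the operation can use two
tape drives at once, if you have them to spare.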

Then, you should have an admin schedule that runs every day, lowering
the migration thresholds on the primary disk pools to zero and keeping
them there until all of the data has been migrated.
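A pair of schedules handles that: one drops the thresholds and one
restores them before the afternoon backups start (the names, times and
threshold values here are examples only):

   define schedule mig_start type=administrative active=yes starttime=07:00 period=1 perunits=days cmd="update stgpool diskpool highmig=0 lowmig=0"
   define schedule mig_stop type=administrative active=yes starttime=13:00 period=1 perunits=days cmd="update stgpool diskpool highmig=90 lowmig=70"

With HIGHMIG=0 and LOWMIG=0 the server migrates everything in the
pool, including that small DB2 database that kept getting passed over.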

Once these two schedules are done, you will have two copies of all of
your data on tape: one that stays onsite and one that goes offsite.  If
you have caching enabled on your disk pools, you will also have a copy
of some data in those pools as well.

The trick is getting all of this done every day between client backups.
You can have multiple admin schedules doing the backup stgpool
operations throughout the day, including while client backups are
running.  This might be a problem and it might not: depending on the
speed of your disk subsystem and how many clients back up
simultaneously, you could get enough disk contention to slow all
operations.  Be careful.
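For instance, a second pass in the evening while clients are still
writing to disk (again, pool and schedule names are placeholders):

   define schedule eve_stgbackup type=administrative active=yes starttime=19:00 period=1 perunits=days cmd="backup stgpool diskpool copypool"

BACKUP STGPOOL only copies files that are not already in the copy
pool, so rerunning it is safe: each run just picks up whatever has
arrived since the last one.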

You might also need more tape drives so you can get more work done
during the non-client backup window.

Since you mentioned this was a disaster test, I would concentrate on
getting my copy pools into a pristine state first.  After you get that
worked out, you can focus on migration.
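A quick way to see how close you are (same placeholder pool names):

   backup stgpool diskpool copypool preview=yes

PREVIEW=YES reports the files and bytes that would be copied without
actually moving anything; when it reports nothing left to copy, the
copy pool is current.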

Once the copy pools are pristine (all data in the primary pools is
also in the copy pool), do a database backup and you are as prepared
for a disaster as you can be.
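Something along these lines, where the device class name is whatever
your tape device class is actually called:

   backup db devclass=ltoclass type=full

The full database backup plus the current copy pool tapes offsite is
what you would restore from in a real disaster.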


Kelly J. Lipp
VP Manufacturing & CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
lipp AT storserver DOT com

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Nancy L Backhaus
Sent: Tuesday, May 23, 2006 9:40 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] Question - Migration

Problem:  TSM doesn't have enough time during the day to drain all of
the disk pools down to 0% before client backups start in the afternoon.
So, on our declared disaster day, one of our critical small DB2
databases was still on disk, never having gone to tape.  Because
migration selects the client with the largest amount of data first,
that small database is stuck on disk for multiple days, never going to
tape.  I could separate the critical servers into smaller disk pools as
one solution, or collocate node data by group.

Question:  Is there a way to force all of a client node's data in a
disk pool to migrate to tape even though it falls below the low
migration threshold?  I see settings in UPDATE STGPOOL to keep data
longer on disk, but I want any data left over from yesterday to migrate
first, before starting with the largest client again.



Background:

TSM Server  Extended Edition 5.3.2.2
AIX Operating System 5.3



Our clients all back up to disk pools first, then we back up the disk
pools to onsite tape pools, then back up the onsite tape pools for
disaster recovery and send those tapes to an offsite location.


Nancy Backhaus
Enterprise Systems
HealthNow, NY
716-887-7979

