ADSM-L

Re: [ADSM-L] Fw: DISASTER: How to do a LOT of restores?

From: Roger Deschner <rogerd AT UIC DOT EDU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 22 Jan 2008 10:14:07 -0600
MOVE NODEDATA looks like it is going to be the key. I will simply move
the affected nodes' data into a disk storage pool, or into our existing
collocated tape storage pool. I presume MOVE NODEDATA can be restarted
if it is interrupted or the server crashes, because what it does is not
very different from migration or reclamation. That would be a big
advantage over GENERATE BACKUPSET, which is not even as restartable as
an ordinary client restore. A possible strategy is to do the long,
laborious, but restartable MOVE NODEDATA first, and then do a quick,
painless, regular client restore or GENERATE BACKUPSET.

Thanks to all! Until now, I was not fully aware of MOVE NODEDATA.

By the way, it is an automatic tape library, a Quantum P7000. We
graduated from manual tape mounting back in 1999.
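
That plan can be sketched as TSM administrative commands. This is a
sketch only -- the node name (PHARM01), the pool names (TAPEPOOL,
DISKPOOL), and the admin credentials are hypothetical placeholders, not
anything from this thread:

```shell
# Sketch only: PHARM01, TAPEPOOL, DISKPOOL, and the credentials are
# hypothetical. Issue from a TSM administrative client (dsmadmc).

# First, see which volumes and pools currently hold the node's data:
dsmadmc -id=admin -pa=secret "QUERY NODEDATA PHARM01"

# Consolidate the node's data into a disk (or collocated tape) pool.
# If the move is interrupted, reissuing the command picks up the
# files that have not yet been moved.
dsmadmc -id=admin -pa=secret \
  "MOVE NODEDATA PHARM01 FROMSTGPOOL=TAPEPOOL TOSTGPOOL=DISKPOOL WAIT=YES"
```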

Roger Deschner      University of Illinois at Chicago     rogerd AT uic DOT edu


On Tue, 22 Jan 2008, Nicholas Cassimatis wrote:

>Roger,
>
>If you know which nodes are to be restored, or at least have some that are
>good suspects, you might want to run some "move nodedata" commands to try
>to get their data more contiguous.  If you can get some of that DASD that's
>coming "real soon," even just to borrow it, that would help out
>tremendously.
>
>You say "tape" but never "library" - are you on manual drives?  (Please say
>No, please say No...)  Try setting the mount retention high on them, and
>kick off a few restores at once.  You may get lucky and already have the
>needed tape mounted, saving you a few mounts.  If that's not working (it's
>impossible to predict which way it will go), drop the mount retention to 0
>so the tape ejects immediately, so the drive is ready for a new tape
>sooner.  And if you are, try to recruit the people who haven't approved
>spending for the upgrades to be the "picker arm" for you - I did that to an
>account manager on a DR Test once, and we got the library approved the next
>day.
>
>The thoughts of your fellow TSMers are with you.
>
>Nick Cassimatis
>
>----- Forwarded by Nicholas Cassimatis/Raleigh/IBM on 01/22/2008 08:08 AM
>-----
>
>"ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU> wrote on 01/22/2008
>03:40:07 AM:
>
>> We like to talk about disaster preparedness, and one just happened here
>> at UIC.
>>
>> On Saturday morning, a fire damaged portions of the UIC College of
>> Pharmacy Building. It affected several laboratories and offices. The
>> Chicago Fire Department, wearing hazmat moon suits due to the highly
>> dangerous contents of the laboratories, put it out efficiently in about
>> 15 minutes. The temperature was around 0F (-18C), which compounded the
>> problems - anything that took on water became a block of ice.
>> Fortunately nobody was hurt; only a few people were in the building on a
>> Saturday morning, and they all got out safely.
>>
>> Now, both the good news and the bad news is that many of the damaged
>> computers were backed up to our large TSM system. The good news is that
>> their data can be restored.
>>
>> The bad news is that their data can be restored. And so now it must be.
>>
>> Our TSM system is currently an old-school tape-based setup from the ADSM
>> days. (Upgrades involving a lot more disk coming real soon!) Most of the
>> nodes affected are not collocated, so I have to plan to do a number of
>> full restores of nodes whose data is scattered across numerous tape
>> volumes each. There are only 8 tape drives, and they are kept busy since
>> this system is in a heavily-loaded, about-to-be-upgraded state. (Timing
>> couldn't be worse; Murphy's Law.)
>>
>> TSM was recently upgraded to version 5.5.0.0. It runs on AIX 5.3 with a
>> SCSI library. Since it is a v5.5 server, there may be new facilities
>> available that I'm not aware of yet.
>>
>> I have the luxury of a little bit of time in advance. The hazmat guys
>> aren't letting anyone in to assess damage yet, so we don't know which
>> client node computers are damaged or not. We should know in a day or
>> two, so in the meantime I'm running as much reclamation as possible.
>>
>> Given that this is our situation, how can I best optimize these
>> restores? I'm looking for ideas to get the most restoration done for
>> this disaster, while still continuing normal client-backup, migration,
>> expiration, reclamation cycles, because somebody else unrelated to this
>> situation could also need to restore...
>>
>> Roger Deschner      University of Illinois at Chicago     rogerd AT uic DOT edu
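
Nick's mount-retention tuning above can likewise be expressed as
administrative commands. This is a sketch under assumptions: the device
class name LTOCLASS is a hypothetical placeholder, and MOUNTRETENTION
is specified in minutes:

```shell
# Hypothetical device-class name (LTOCLASS); MOUNTRETENTION is minutes.

# While restores are likely to reuse already-mounted volumes, keep
# tapes mounted longer to save remounts:
dsmadmc -id=admin -pa=secret "UPDATE DEVCLASS LTOCLASS MOUNTRETENTION=60"

# If that is not paying off, dismount immediately after use so each
# drive is free for the next tape sooner:
dsmadmc -id=admin -pa=secret "UPDATE DEVCLASS LTOCLASS MOUNTRETENTION=0"
```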