Subject: Re: Restore of AIX/Linux servers at DR test
From: "Meadows, Andrew" <AMeadows AT BMI DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 30 May 2006 10:57:18 -0500

We are experiencing the same issue, with tape being the bottleneck. I
have thought this through, and the only thing I can think of is creating
two offsite copies so the servers have two tapes to fight over instead
of one. That is about all I could come up with, unfortunately, and
honestly I don't even know if it will work.
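
For what it's worth, roughly what that would look like on the server
side (the pool and device class names below are made up, and I have not
tried this):

    A second copy storage pool on the existing tape device class:

        define stgpool offcopy2 ltoclass pooltype=copy maxscratch=200 reusedelay=5

    Then back the primary pool up to both copy pools:

        backup stgpool tapepool offcopy1 maxprocess=4 wait=yes
        backup stgpool tapepool offcopy2 maxprocess=4 wait=yes

The obvious cost is the extra scratch tapes, the drive time for a second
BACKUP STGPOOL run, and a second set of volumes to track and vault
offsite.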



-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Richard Mochnaczewski
Sent: Tuesday, May 30, 2006 10:32 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Restore of AIX/Linux servers at DR test

Hi Everybody,

We are investigating ways of improving how our servers are restored at a
DR test. For AIX/Linux servers, once the server's OS has been restored,
what has everyone's experience been with restoring application
filesystems on UNIX-based systems? For instance, let's say you have 20
servers, each with between 100 and 150 GB of data aside from the OS.
What is the best way to restore that data? We have caching turned on for
the storage pools, so some of the data is restored from our storage
pools, but the majority of the data is on tape, and the tapes seem to be
the bottleneck. We tried using collocation on two servers, but even with
collocation, when multiple restores were launched on different
filesystems, TSM treated them as a classic restore and each restore
became single-threaded.
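
To make that concrete, a rough sketch of the knobs involved (pool, node
and filesystem names below are placeholders, not our actual config, and
this is untested):

    Collocate the primary tape pool by filespace, so one node's
    filesystems can land on different volumes:

        update stgpool tapepool collocate=filespace

    Allow the node more than one mount point on the server:

        update node dbserver01 maxnummp=2

    and raise the client's session count in dsm.sys:

        resourceutilization 4

    Launch the restore as an unrestricted wildcard so it can run as a
    no-query restore; as far as I understand, options like -pick,
    -fromdate or -todate push it back to the classic, single-threaded
    restore:

        dsmc restore "/data01/*" -subdir=yes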

Rich
