
Re: archive to tape ???

Subject: Re: archive to tape ???
From: Bill Smoldt <smoldt AT STORSOL DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Tue, 2 Mar 2004 08:21:09 -0700
Michael,

What are you intending to archive on the disk array?  You can't see the
files on the disk in their native format, and you can't archive the disk
storage pools because they're locked by TSM.  Unless native files are
stored on this disk array, you would have to archive from the clients
rather than from the array itself.  Perhaps there is more to your archive
plan than you've mentioned?
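
If the plan is to archive from the clients, that's just the ordinary archive
function.  For example, on a Unix client it would be something along these
lines (the path and management class name here are only placeholders, not
anything from your environment):

    dsmc archive -subdir=yes -archmc=DAILY7 "/data/*"

with the Windows clients doing the equivalent against their local drives.
The data then flows through the normal storage pool hierarchy to whatever
destination the archive copy group names.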

The second data center configuration makes much more sense.  Presumably, you
have virtual volumes defined at each data center pointing to the other.  A
critical missing element in what you're revealing to us is the speed of the
link between the two data centers and the volume of data that changes daily.
Without that information it's impossible to know if the configuration is
realistic.
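
For reference, server-to-server virtual volumes are normally set up along
these lines (the server names, password, and addresses below are placeholders,
not your configuration):

    /* on TSM#2, register TSM#1 as a node of type server */
    register node TSM1 secretpw type=server

    /* on TSM#1, define the target server, a SERVER device class, */
    /* and a copy pool that writes to it                          */
    define server TSM2 serverpassword=secretpw hladdress=dc2.example.com lladdress=1500
    define devclass DC2CLASS devtype=server servername=TSM2 maxcapacity=2G
    define stgpool DC2COPY DC2CLASS pooltype=copy maxscratch=100

With that in place, storage pool backups and database backups can be directed
at DC2COPY, which is exactly why the daily change volume and the link speed
matter so much.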

Bill Smoldt
STORServer, Inc.

-----Original Message-----
From: Michael D Schleif [mailto:mds AT HELICES DOT ORG]
Sent: Tuesday, March 02, 2004 7:45 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: archive to tape ???

* Steve Harris <Steve_Harris AT HEALTH.QLD.GOV DOT AU> 
[2004:03:02:16:18:51+1000]
scribed:
> Weird requirement.

Yes.

> Not something that I'd recommend. And I don't see the logic for having
> only part of the data, but, its an intellectual challenge as to how
> this can be done.

Their design is a bit more complex than I originally posted.  They have a
second data center (DC), and there, a second TSM server using a second disk
array.  TSM#1 in the main DC#1 is supposed to replicate itself to TSM#2 at
DC#2.  DC#2 is supposed to house failover servers for all critical servers at
DC#1.  In the event of a catastrophic failure at DC#1, TSM#2 (and DRM#2?) are
supposed to recover to these failover servers at DC#2, and everything will be
back online within a few hours.  I am not yet privy to the reality of this
setup, and I do not believe that it is fully functional as I write this; but,
that is their idea.

Also, they have already spent a lot of money, and a parade of consultants
precedes me.  They need to minimize the cost of anything they do that they
are not already doing.  I hope to demonstrate my value by implementing a
sound, simple, and inexpensive tape solution -- then I may have an
opportunity to get them to question their overall strategy.

> Try this
>
> Set up a random diskpool big enough to hold one night's backup.  Point
> backup at this.
> Set up a main sequential file diskpool.  Make this the nextstg of the
> nightly pool, with manually controlled migration between the two.
> Each day, run a backup stg from the nightly pool to the tape pool and send
> the tapes off site.  Then migrate the nightly pool to the main pool.
> Script a tape return process keyed on the state and update date of the
> drmedia table.
> When the tapes come back, run a delete vol discarddata=yes on them.
<snip />

OK.  Thank you for your ideas.
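
If I follow your suggestion, the setup would look roughly like this (an
untested sketch; the pool, device class, and path names are made up, and
OFFSITECOPY stands in for whatever tape copy pool DRM manages here):

    /* sequential FILE pool as the main onsite pool */
    define devclass FILECLASS devtype=file directory=/tsm/filepool maxcapacity=4G
    define stgpool MAINFILE FILECLASS maxscratch=500

    /* random-access pool sized for one night's backups, migration held manually */
    define stgpool NIGHTLY disk nextstgpool=MAINFILE highmig=100 lowmig=99
    define volume NIGHTLY /tsm/nightly01.dsm formatsize=51200

    /* daily cycle: copy last night's data to the offsite tape pool, */
    /* then hand the tapes to DRM                                    */
    backup stgpool NIGHTLY OFFSITECOPY wait=yes
    move drmedia * wherestate=mountable tostate=vault

    /* kick off migration to MAINFILE, then set thresholds back once it drains */
    update stgpool NIGHTLY highmig=0 lowmig=0
    update stgpool NIGHTLY highmig=100 lowmig=99

    /* when the week-old tapes return onsite */
    delete volume VOL001 discarddata=yes

Is that about what you had in mind?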

But, what about my idea to _archive_ from the disk array to tape?  Is that
not doable?  What are the flaws in this idea?  Comments?
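
The policy side is what I am least clear on.  My rough idea (the domain,
policy set, management class, and pool names below are only placeholders) is
an archive copy group with a seven-day retention, something like:

    define domain ARCHDOM
    define policyset ARCHDOM ARCHSET
    define mgmtclass ARCHDOM ARCHSET DAILY7
    define copygroup ARCHDOM ARCHSET DAILY7 type=archive destination=ARCHTAPE retver=7
    assign defmgmtclass ARCHDOM ARCHSET DAILY7
    validate policyset ARCHDOM ARCHSET
    activate policyset ARCHDOM ARCHSET

where ARCHTAPE would be a primary tape pool, and the clients would run their
nightly archives with -archmc=DAILY7.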


> >>> mds AT HELICES DOT ORG 02/03/2004 13:05:26 >>>
<snip />

> The client says that they want to copy daily to tape only the most
> recent version of files that have changed since the previous day.
>
> They will accept copying all most-recent file versions to tape daily.
>
> Each morning, the tapes last written will be taken offsite, and the
> tapes from seven (7) days ago brought back onsite and made available.
>
> Furthermore, there are two (2) offsite locations, one for Windows
> platforms and one for *NIX platforms.
>
>
> I am thinking that this can be accomplished by _archiving_ from the
> arrays to tape.  I am not clear how to specify policy.  Any ideas?
<snip />

--
Best Regards,

mds
mds resource
877.596.8237
-
Dare to fix things before they break . . .
-
Our capacity for understanding is inversely proportional to how much we
think we know.  The more I know, the more I know I don't know . . .
--
