>> On Thu, 23 Aug 2007 09:40:28 -0400, Keith Arbogast <warbogas AT INDIANA DOT
>> EDU> said:
> Knowing when they do and when they don't is the crux of my dilemma.
> The virtual volume methodology is presented in IBM designed training
> classes, Administrator's manuals, and in a very recent TSM Webcast
> as if it is the best practice for cross data center backups. There
> is much about how to do it, but precious little about why to do
> it. No mention of the problem that it is intended to solve. Or, how
> you might do the same thing as well or better without it.
I'm having difficulty figuring out why this still feels not-answered
to you. Perhaps the answer is best associated with a gradient of
solution costs, measured over time.
There are myriad different ways you might choose to transport bits
from site A to site B. Some of the simplest to understand are
courier, long-haul dark fiber, and IP.
Courier traffic is in many ways the simplest, and quite cheap in
dollars, but has very poor operational characteristics: unusual
traffic is difficult to accommodate, delays are common, and couriers
are unreliable. We never had an Iron Mountain audit of our stored
volumes pass without incident. That did not inspire confidence.
If you want to pay for the extra-expensive GBICs and miles of
dedicated dark fiber, you can just treat the "offsite" devices as
though they were onsite. This doesn't address many of the DR issues
at all (having data in two places, what about the hardware you intend
to use at disaster time, etc..) but is certainly much simpler to
manage if you're flush.
As expensive as this is right now, reel yourself back to 1997 and
wonder what it would cost to get, oh, say, 350 miles of
dark fiber. IP connectivity would look pretty good then. (at least,
it did to me. :)
Since TSM is already focused on storing IP-communicated data (and
there are IBM projects accustomed to using TSM as an abstract object
store) it was obvious to use the skills and resources already
familiar to the DR-desiring supplicant to help him solve his problem.
Hey, presto: you want to do offsite stuff for your TSM? Well, all you
need is -ANOTHER- TSM, which you can manage in more or less the same
way.
If you do this, your offsite data is available to you in real-time.
You can do last-minute pushes right until the hurricane actually busts
down your primary data center.
So, that's the problem it's intended to solve, and that's also why you
might want to use this particular tool to solve it.
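To make the plumbing concrete, here's a minimal sketch of the
server-to-server virtual-volume setup. All the server names,
passwords, addresses, and pool names below are hypothetical, and the
syntax is from memory, so check the Administrator's Reference for
your level before typing any of it:

    (on the target server: a node of TYPE=SERVER to own the objects)
    register node SOURCE-A sourcepw type=server

    (on the source server)
    define server DR-B serverpassword=secret hladdress=dr.example.edu lladdress=1500
    define devclass REMOTE devtype=server servername=DR-B maxcapacity=10g mountlimit=2
    define stgpool OFFSITEPOOL REMOTE pooltype=copy maxscratch=200
    backup stgpool TAPEPOOL OFFSITEPOOL

The virtual volumes land on DR-B as archive data owned by the
SOURCE-A node, so all the ordinary storage-pool machinery on the far
side applies to them.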
As for your last concern, it's not IBM's job to show you how you might
do the work better without their product :) but biased as I am, I
think TSM's solution is damn near optimal.
> Our current goal is that each of our two data centers be the offsite
> backup and DR site of the other. If we lose a TSM server or a data
> center we would restore it at the one sixty miles away.
This seems perfectly reasonable. I recommend you become very familiar
with deploying several TSM instances on the same piece of hardware.
In that way, you can do all sorts of DR testing on hardware you
already own. You can restore all of SERVER-A onto an instance hosted
on SERVER-B's physical hardware, and prove to yourself that it can be
done consistently and according to procedure.
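Assuming a 5.x server on Unix, a second instance is little more than
its own directory, its own options file with a unique TCP port, and
its own database and log volumes. The paths and port below are purely
illustrative:

    (instance directory holds this instance's dsmserv.opt,
     with e.g. TCPPORT 1502, plus its formatted db/log volumes)
    export DSMSERV_DIR=/tsm/instance2
    export DSMSERV_CONFIG=/tsm/instance2/dsmserv.opt
    dsmserv

Point dsmadmc at the right port and you can drive the test instance
without disturbing the production one.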
I'd recommend you have at least three different server instances on
each end of your link (which doesn't mean more than one set of
hardware)... One to serve the local clients, one to serve the
offsite-copy needs of the distal clients, and one as a library manager
to the other two, so when you discover the need to add yet-another
instance, it's straightforward.
- Allen S. Rout