ADSM-L

Re: Collocation and Disaster Recovery

1998-06-16 05:53:49
From: John Tarella <gj_tarella AT IT.IBM DOT COM>
There have been many recent appends on collocation for disaster recovery tapes,
with the objective of speeding up a potential recovery by reducing the number
of tape mounts required to recover a given node. Collocation works for local
tapes because the server appends a given workstation's data onto that
workstation's own tape: 10 workstations will have 10 tapes (for the sake of
argument).

If I collocate DR tapes, and produce DR tapes daily, I will produce 10 tapes
each day, and assuming they each stay on average 30 days in the vault I will
end up with 10x30=300 "collocated" tapes in the vault. Not a very good way to
speed up recovery by reducing tape mounts: on average it will take 30 mounts
to recover each node.
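The arithmetic above can be sketched as a couple of lines of Python; the node
count, retention period, and one-tape-per-node-per-day rate are the
illustrative figures from the example, not measured values.

```python
# Illustrative figures from the example above: 10 nodes, 30-day vault
# retention, one collocated DR tape cut per node per day.
def vault_tapes(nodes, retention_days, tapes_per_node_per_day=1):
    """Collocated daily DR tapes accumulating in the vault."""
    return nodes * retention_days * tapes_per_node_per_day

def mounts_per_node(retention_days):
    """Restoring one node means mounting each of its daily tapes."""
    return retention_days

print(vault_tapes(10, 30))   # 300 tapes sitting in the vault
print(mounts_per_node(30))   # 30 mounts to recover a single node
```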

If I turn on reclamation for the collocated tapes I will have fewer tapes in
the vault, but I will probably keep reclaiming the same data day after day,
depending on the thresholds. A lot of work on the server side without much
benefit.

What are the alternatives?

With no collocation, the data will be striped across many tapes (at least 30).
I will probably not be able to perform restores in parallel because of tape
contention: while one process is restoring from tape A, another process
requiring the same tape will wait.

Another brute-force alternative would be to create a new disaster recovery
copy pool every week, for example:
week 1: create new DR copy pool DR1, move it to the vault daily.
week 2: create new DR copy pool DR2, move it to the vault daily.
week 3: create new DR copy pool DR3, move it to the vault daily; delete all
volumes in copy pool DR1, then pool DR1 itself.
week 4: create new DR copy pool DR1, move it to the vault daily; delete all
volumes in copy pool DR2, then pool DR2 itself.
and so on.
Basically, each week I do a full storage pool backup and then proceed with
incrementals. If the data in the primary pool is collocated, the data in the
copy pools will be more or less contiguous.
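The three-pool weekly rotation above can be written down as a small helper;
the pool names DR1-DR3 and the week numbering come from the example, and this
is only a scheduling sketch, not an ADSM command sequence.

```python
# Three-pool weekly rotation: each week one pool is (re)created for the
# new full backup, and from week 3 onward the oldest pool is deleted.
def rotation(week):
    """Return (pool to create this week, pool to delete, or None)."""
    create = f"DR{(week - 1) % 3 + 1}"
    delete = f"DR{(week - 3) % 3 + 1}" if week >= 3 else None
    return create, delete

for w in range(1, 6):
    print(w, rotation(w))
```

At any moment at most two pools' worth of volumes sit in the vault, yet every
restore needs at most one full plus one week of incrementals.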

Another alternative is to restore the non-collocated DR pool to a collocated
primary pool before starting the client restores, but this has the
disadvantage of wasting time.

A final alternative is to reduce the overhead of mounting and locating data
on tape, and here IBM's Magstar MP drives are a hands-down winner.

Regards, Giulio John Tarella
Consulting I/T Specialist
IBM Global Services