Subject: [Networker] Cloning indexes and other data -- need advice
From: George Sinclair <George.Sinclair AT NOAA DOT GOV>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Tue, 23 Sep 2003 17:31:16 -0400
Hello,

I'm asking for some advice here, along with a few questions. We have some
very important data that has resided on as many as 4 clients, and we need
to be able to reconstruct it as it existed on any of those clients at any
point in time. The good news is that none of the tapes has been recycled.
The bad news is that we are nearing the point where we may need to
recycle some of the affected tapes, partly because we need tapes and
partly because I suspect we'll eventually need to migrate data from older
media to newer as we phase out older equipment. For 90% of our backups
this isn't a concern, but for this particular data it is. I don't care
about the rest of the data on those tapes, but the data in question is
vital.

My thought is that we could clone all the savesets from those 4 clients
-- fulls, levels, incrementals, everything. That would be a good test of
the viability of the data, and it would condense all the needed data onto
a smaller set of tapes, allowing us to free up the originals. But there
are two issues here:

1. We might need to go back beyond the browse policy at some point. In
most cases, the data lived under a single file system of 300+ GB. That's
a lot of data to read through when doing a saveset recover, so being
able to recover older versions of the client indexes would save time.
This got me thinking that we would also need to clone the client indexes
-- every one of them. Can you clone an index the same as any other
saveset? Is there anything tricky we should be aware of?
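For what it's worth, here is the kind of thing I had in mind -- a sketch
only, which just prints the mminfo/nsrclone commands for one client so
the saveset list can be reviewed before anything is cloned. The client
names and the "Important Clone" pool name are hypothetical; in NetWorker
the client file index is itself a saveset named "index:<client>", so it
should be selectable the same way as any other saveset:

```shell
#!/bin/sh
# Sketch only: print, rather than run, the commands that would list and
# clone every saveset (including the index saveset) for one client.
preview_clone_cmds() {
    client="$1"
    pool="Important Clone"   # hypothetical clone pool name
    # All savesets for the client -- fulls, levels, incrementals.
    echo "mminfo -avot -q client=$client -r ssid"
    # The client file index is just another saveset, named
    # "index:<client>", so it can be listed (and cloned) the same way.
    echo "mminfo -avot -q name=index:$client -r ssid"
    # Each ssid returned above would then be fed to nsrclone:
    echo "nsrclone -b \"$pool\" -S <ssid>"
}

preview_clone_cmds client1
```

Running the mminfo queries first also gives a checklist to verify the
clone pool against afterwards.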

2. Much of the data was backed up to LTO tape in an LTO library managed
by a Linux storage node; the rest went to SDLT tapes in a second library
managed by the same storage node. The primary server runs Solaris. The
problem is that the /etc/stinit.def file did not always exist on the
storage node. In fact, we ran the libraries there for several months
before we created it, not realizing we needed it. We only found out when
we first tried to recover data from LTO and saw all these messages about
positioning by record being disabled, block size problems, etc. Adding
the stinit.def file fixed the problems; we never saw them on the SDLT,
though. So what's going to happen if I try to clone those older savesets
that are on LTO from before? I mean, if we were going to recover data
that had been written to LTO before the change, I would first rename the
stinit.def file, reboot the storage node, and then run the recover. But
if I do that to clone those older savesets, I don't think they'll get
written out right? Maybe it doesn't matter? Either way, won't it be
really slow?
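For reference, the entry we ended up adding to /etc/stinit.def looks
roughly like the fragment below. This is illustrative only -- the
manufacturer and model strings must match what the Linux kernel reports
for your particular drive at boot, and the option values are the ones
commonly suggested for LTO, not necessarily what your drive needs:

```
# Illustrative stinit.def entry for an LTO drive; manufacturer/model
# must match the inquiry strings the kernel logs for the drive.
manufacturer=HP model="Ultrium 1-SCSI" {
    scsi2logical=1
    can-bsr
    auto-lock=0
    mode1 blocksize=0 compression=1
}
```

With blocksize=0 the drive runs in variable-block mode, which is what
let positioning by record work for us.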

3. Does anyone have any advice about this clone-the-savesets-and-indexes
approach to freeing up those tapes? From now on, starting with the next
fulls, we're thinking of backing up the current data to its own pool of
tapes so it will be the only thing on those volumes. Then we won't have
to worry about unrelated data being on the new tapes, and we'll never
recycle them. This should keep the problem from recurring.
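Concretely, I was picturing something along these lines -- a sketch of
an nsradmin session, where the pool name and client list are
hypothetical and would need adjusting to the site:

```
# Hypothetical nsradmin session to create a dedicated backup pool
# for the critical data; adjust names and clients to your site.
nsradmin> create type: NSR pool; name: Critical Data;
          pool type: Backup;
          clients: host1, host2, host3, host4
```

Then the next fulls for those clients would be directed at the
"Critical Data" pool and those volumes flagged so they never recycle.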

Would appreciate any advice.

George
George.Sinclair AT noaa DOT gov

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=