Networker

Subject: Re: [Networker] Most efficient way to create duplicate clones?
From: George Sinclair <George.Sinclair AT NOAA DOT GOV>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Tue, 11 May 2004 13:53:04 -0400
Thanks, Darren. Here's your answer, and I do have a few other questions
after thinking more about this.

Yes. I mean, each library uses its own SCSI card and respective cables
to connect to the storage node server. We have two libraries connected
to this storage node server: the Storagetek LTO and the ATL P1000 SDLT
library. We also have an older ATL P1000 DLT7000 drive library that's
attached to the server. Perhaps I was thinking of the slow times
involved in cloning tapes from the server's library over to the LTO or
SDLT? Clearly, in this case, you're dealing with a tremendous drive
mismatch in terms of read/write speed, not to mention the network for
transferring the data. I guess the network shouldn't be involved when
cloning between the SDLT and LTO? I mean, communication between the
server and the storage node would use the network, but the data should
not be going over the network, right?

I know I've cloned tapes before from the server's library over to either
of the other two libraries, and the times were much, much slower than
when going from the LTO to the SDLT or vice versa. Does that sound like
the behavior you'd expect in that case? I guess the network would be the
main culprit in that case and maybe not so much the drive mismatch?

HOWEVER, if cloning from one library to another, wherein both libraries
are connected to the SAME host, won't really degrade the speed, then I
guess there's really no need for us to use two different clone pools
after all? Maybe just a waste of management? I don't see that it buys us
anything now that I think about what you said.

We have 2 SDLT drives and 4 LTO drives. My plan before, assuming the
savesets did not span across tapes, was something like:

clone operation 1: SDLT drive 1 => SDLT drive 2
clone operation 2: LTO drive 1 => LTO drive 2
clone operation 3: LTO drive 3 => LTO drive 4

Now:

clone operation 1: SDLT drive 1 => LTO drive 1
clone operation 2: SDLT drive 2 => LTO drive 2
clone operation 3: LTO drive 3 => LTO drive 4

but I guess it really doesn't matter how you slice it; I'm not losing
or gaining any drive use by switching back to just one clone pool.
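Either pairing could be kicked off from the command line. Here's a minimal dry-run sketch of what the three parallel clone operations might look like using nsrclone; the pool name "CloneFull" and the volume names are hypothetical placeholders, and the commands are only echoed rather than run:

```shell
#!/bin/sh
# Dry-run sketch: print the nsrclone command for each source volume so
# that all three drive pairs could be kept busy at once (one nsrclone
# per source tape). Pool and volume names are made up for illustration.
POOL="CloneFull"
for vol in SDLT001 LTO001 LTO002; do
    # Each line is the command we'd actually run, one per drive pair:
    echo nsrclone -b "$POOL" "$vol"
done
```

NetWorker itself decides which physical drive mounts each volume, so the "pairing" above is really just three concurrent clone jobs; the server allocates a free drive for the read side and another for the write side of each.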

On the other hand, if I decided to simply clone the clone volumes, then
that would make it easy because I wouldn't have to figure out which
ssids to clone, since I would want to clone all of them. Having two
different clone pools might be nice because then I could clone anything
on an LTO to an SDLT and vice versa, so that every saveset has both an
SDLT clone and an LTO clone to satisfy the "media diversity" factor.
Might feel safer having two copies on separate media types?
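If I went the two-pool route, picking the ssids for the second clone could be scripted: query one clone pool with mminfo and feed each ssid to nsrclone aimed at the other pool. A dry-run sketch, with both pool names being hypothetical stand-ins:

```shell
#!/bin/sh
# Sketch of the "media diversity" idea: every save set already cloned
# into the SDLT pool also gets a clone into the LTO pool.
# Pool names below are placeholders, and the nsrclone commands are
# echoed rather than executed.
SRC_POOL="SDLT Clone"
DST_POOL="LTO Clone"
# mminfo lists the ssids currently resident in the SDLT clone pool;
# stderr is discarded so the sketch degrades quietly if mminfo is absent.
for ssid in $(mminfo -a -q "pool=$SRC_POOL" -r ssid 2>/dev/null); do
    echo nsrclone -b "$DST_POOL" -S "$ssid"
done
```

Since cloning a clone reads the clone volume, this also doubles as a verification pass over the first copy.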

George

Darren Dunham wrote:
>
> > Thanks, Darren. Those are some good ideas. I think cloning the clone
> > will be what I'll go with. Either way, the data is being validated since
> > the original has to be read to be cloned anyway. BTW our SDLT and LTO
> > drives are not in the same libraries, but both libraries are managed by
> > storage node server. Just thought reading and writing would be faster
> > if done in the same library?
>
> I don't see why.  Generally the drives within the library are directly
> wired into the host for communication.  The library isn't involved in
> the data path, so the physical location shouldn't have much to do with
> the speed.
>
> Is this true for your libraries?
> --
> Darren Dunham                                       ddunham AT taos DOT com
> Senior Technical Consultant         TAOS            http://www.taos.com/
> Got some Dr Pepper?                           San Francisco, CA bay area
>          < This line left intentionally blank to confuse you. >
>

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list.
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=