Subject: Re: [Networker] clone parallelism
From: Darren Dunham <ddunham AT TAOS DOT COM>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Sun, 3 Oct 2004 09:59:35 -0700
> >>I have a short question about cloning and the parallelism setting of the
> >>drives. Is the following configuration possible?
> >>
> >>Backup to 1 drive  with a parallelism of 4 (Diskbackup)
> >>Clone  to 4 drives with a parallelism of 1 (DLT8000)
> >
> >
> > You can do that, but it would likely take a long time.  It's possible
> > that the first clone off the source volume would take four times as long
> > as the backup (due to having to read through the tape four separate
> > times to serialize the clone).
>
> I thought it would work like a normal backup: the more drives I use, the
> faster it goes. The clone should be written *only one time* from
> diskbackup to DLT8000, and with 4 DLT8000 drives it should be faster
> than with only 1 drive.

Well, that would depend on my having actually noticed the diskbackup line.
I've simply never come across parallelism in a discussion about diskbackup
before.  Since your source is a disk device rather than a tape, the clone
sessions can read the savesets concurrently, so the serialize-the-tape
caveat above doesn't apply.

Current tape drives are *very* fast, but assuming you have the throughput
to support it on the disk back end (DLT8000s aren't so fast), you should
be able to sustain cloning at tape speed on all four drives.
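A rough back-of-the-envelope check of that throughput claim.  The ~6 MB/s
figure is the commonly quoted DLT8000 native transfer rate; the disk read
rate is a made-up placeholder you'd replace with your own measurement:

```python
# Illustrative sketch: can the disk back end feed four DLT8000 drives
# at full tape speed?  Numbers below are assumptions, not from the thread.

DLT8000_NATIVE_MBPS = 6.0     # commonly quoted DLT8000 native rate, ~6 MB/s
NUM_TAPE_DRIVES = 4
DISK_READ_MBPS = 40.0         # hypothetical sustained read rate of the disk device

# Each clone stream must run at tape speed, so the disk must sustain the sum.
required_mbps = DLT8000_NATIVE_MBPS * NUM_TAPE_DRIVES

if DISK_READ_MBPS >= required_mbps:
    print(f"OK: need {required_mbps:.0f} MB/s, disk delivers {DISK_READ_MBPS:.0f} MB/s")
else:
    print(f"Bottleneck: need {required_mbps:.0f} MB/s, "
          f"disk delivers only {DISK_READ_MBPS:.0f} MB/s")
```

With these assumed numbers the disk comfortably covers all four drives
(24 MB/s needed); a slower back end would throttle every clone stream
below tape speed and cause the drives to shoe-shine.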

--
Darren Dunham                                           ddunham AT taos DOT com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >

--
Note: To sign off this list, send a "signoff networker" command via email
to listserv AT listmail.temple DOT edu or visit the list's Web site at
http://listmail.temple.edu/archives/networker.html where you can
also view and post messages to the list. Questions regarding this list
should be sent to stan AT temple DOT edu
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
