Networker

Re: [Networker] Cloning from Disk

From: Shawn Cox <shawn.cox AT PCCA DOT COM>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 12 Jul 2006 12:47:43 -0500
I do not believe that disk devices multiplex.  There is no need.

I can't tell from your message whether you are cloning back to disk or to tape.
If to disk, then contention is probably your culprit: during the clone you
are reading and writing to the disk device at the same time, whereas during
your backup you are only writing to it.  If you are cloning from disk to tape,
your pathways are completely different.  It may be that your iSCSI and disk
throughput are faster than your SCSI pathway to the tape device.

I have a similar setup to yours and typically see up to 60 MB/sec writes from
many LAN clients to disk devices (mine are SAN-attached, not iSCSI).  But I
cannot stage to my LTO-2 tapes faster than about 30 MB/sec, which is the limit
for my SCSI connection to the tape devices.
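A raw write test outside NetWorker can confirm that sort of ceiling; a minimal sketch, assuming a Linux server with the st tape driver (the /dev/nst0 path is an assumption, and the test overwrites data at the current tape position, so use a scratch tape or point TARGET at a file for a dry run):

```shell
#!/bin/sh
# Hypothetical sanity check: measure raw sequential write speed to the tape
# path, bypassing NetWorker entirely. /dev/nst0 is an assumed device name;
# point TARGET at a scratch tape, or at a file to exercise the disk path.
TARGET="${TARGET:-/dev/nst0}"
# Write 1 GiB of zeroes in 256 KiB records; the last line dd prints reports
# bytes and elapsed time, giving a raw ceiling to compare against clone speed.
dd if=/dev/zero of="$TARGET" bs=256k count=4096 2>&1 | tail -n 1
```

If the raw number also tops out near 30 MB/sec, the SCSI path to the drive, not NetWorker's de-multiplexing, is the bottleneck.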

Are your tape devices on a single SCSI channel, or on separate channels for
each device?
Compression could be a factor, especially if you didn't use compressasm at
the client level.
You may have some SCSI errors between the server and the tape devices
slowing down the throughput.
Anything else going on during the clone?
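On the SCSI-error point, a quick way to look for trouble on a Linux server is a sketch like the following (the /var/log/messages path assumes Red Hat-style syslog; adjust for your distribution):

```shell
#!/bin/sh
# Hypothetical check for SCSI errors reaching the kernel log on a Linux
# backup server. Neither command aborts the script if logs are unreadable.
# Recent kernel messages mentioning SCSI or tape (st) devices:
dmesg 2>/dev/null | grep -iE 'scsi|st[0-9]' | tail -n 20
# Count of SCSI error lines in syslog (path is a Red Hat-style assumption):
grep -ic 'scsi.*error' /var/log/messages 2>/dev/null || true
```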

-Shawn

-----Original Message-----
From: Legato NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
On Behalf Of Librado Pamintuan
Sent: Wednesday, July 12, 2006 12:10 PM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker] Cloning from Disk

Good morning to all,
Just need to clarify and ask some questions regarding cloning.
We are currently using an AX100i storage array for our disk backup, with
12 x 7,200 RPM SATA disk drives.

During backup I'm getting a decent throughput of 25 to 30 MB/s, but when I
start the cloning session, the throughput drops to 10 to 15 MB/s.
Here's a possible explanation for the throughput issue: during backup, data
is written to the disks at random (multiplexed), while on cloning, Legato
NetWorker is de-multiplexing the save sets from the disk before it writes
them to tape, thus reducing the throughput. Is this how it is supposed to
be? Is there any way to speed up cloning, or is there any other faster
method/process to transfer the data from disk to tape?
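For moving data from disk to tape, NetWorker's staging command is one alternative to cloning, since it frees the disk copy after a successful write; a minimal sketch, assuming a "Default Clone" tape pool and a placeholder save-set ID (both are illustrative, not values from this environment):

```shell
#!/bin/sh
# Hypothetical NetWorker commands -- pool name and ssid are placeholders.
# Stage a save set from the disk device to a tape pool; nsrstage removes
# the disk copy after the tape copy succeeds:
nsrstage -b "Default Clone" -m -S 4294967295
# Clone instead (keeping the disk copy as well as the tape copy):
nsrclone -b "Default Clone" -S 4294967295
```

Either way, the read side still de-multiplexes the save sets from disk, so the SCSI path to the drives is usually the harder limit.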


Environment:
Server: PowerEdge 2850 dual 3.2 GHz processor with 4 GB RAM
O/S: Red Hat ES 3

NICs: 2 Intel PRO/1000 MT connected directly to the AX100i storage array
Tape Library/Drive: ADIC Scalar 100 with 2 LTO-2 tape drives (SCSI
connectivity).


thanks in advance,


Librado Pamintuan
Technical Support Analyst II
Information Systems Dep.
Operations Group
City of Regina


Phone:          (306) 777-7573
General Fax: (306) 777-6804
eFax:             (306) 546-6002
eMail:            lpamintuan AT regina DOT ca



DISCLAIMER: The information transmitted is intended only for the addressee
and may contain confidential, proprietary and/or privileged material. Any
unauthorized review, distribution or other use of or the taking of any
action in reliance upon this information is prohibited. If you received
this in error, please contact the sender and delete or destroy this
message and any copies.

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER

