Re: [Networker] Weird parallelism issue?

From: "Brian O'Neill" <oneill AT OINC DOT NET>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Sun, 23 May 2010 21:08:27 -0400
I think I know a bit about what happened now.

The server parallelism was set to 16, so it queued up 16 streams even though there was only one active drive. Four went to the drive, and the other 12 produced "waiting for 12 writable tapes" messages. When the initial four finished, other saves kept filling the freed slots (presumably because they hadn't been assigned to the 15 initial slots).

After 30 minutes, it apparently times out the waiting slots and re-queues them, so the count went back up to 10 waiting tapes.

I set server parallelism to 4, and when the next 30-minute period occurred, they all released their slots. It is no longer complaining about waiting tapes, so the remaining saves should get onto the tape faster.
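(For anyone following along: the server parallelism attribute can also be changed from the command line with nsradmin instead of the GUI. A sketch of the session below; the server name is a placeholder, and the "parallelism: 16" output just reflects the value I started with.)

```
# nsradmin -s backupserver          <- "backupserver" is a placeholder hostname
nsradmin> . type: NSR               <- select the NSR (server) resource
nsradmin> show parallelism          <- restrict display to the parallelism attribute
nsradmin> print
                parallelism: 16;
nsradmin> update parallelism: 4     <- match the single working drive
```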


On 5/23/2010 6:58 PM, Brian O'Neill wrote:
I'm running 7.4.5 on a CentOS 4 box.

Normally my backups go to a VTL first, but the VTL is offline so I'm
having the backups go direct to tape tonight. I only have one LTO-3 tape
drive operating as well (usually it is used for cloning/staging, and the
second drive started acting up today as well).

When the savegrp first runs, 4 backup streams write to the tape as
expected for the device parallelism setting. But after the initial four
are completed, it looks like only a single stream is permitted. I'm not
sure what exactly is happening; it seems like each additional client
that was waiting for a writable Default tape comes out of that state one
at a time, with the mounted tape having to re-verify its label each time,
which slows things way down.

I checked other parallelism settings I could find. Server is set to 16,
while savegrp is set to 0. Only one savegrp was running at the time.

Any ideas how I can speed this up?

-Brian

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems with this
list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
