I have not had this particular one happen, but something similar.
It seems that if you have another device available for the pool, even if
that device is offline, and you have used up the target sessions for all
other devices currently in use for that pool, then NetWorker queues save
sets for a volume that it expects to be mounted on a new device. This
would explain your situation (and some others).
Our solution was to list only the devices that were really available in
the "devices" attribute of each pool's resource.
In your case you could also just change the "group parallelism" to 4.
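As a rough sketch of that fix (the pool name and device paths below are hypothetical, and the exact nsradmin dialect can vary between NetWorker versions, so treat this as an outline rather than a recipe), trimming a pool's "devices" attribute from an interactive nsradmin session might look like:

```shell
# Run on the NetWorker server; "TestPool" and the rd= device
# paths are made-up examples -- substitute your own resources.
nsradmin> . type: NSR pool; name: TestPool
nsradmin> show devices
nsradmin> update devices: rd=stnode:/dev/nst0, rd=stnode:/dev/nst1
nsradmin> print
```

The idea is simply that a pool whose "devices" list contains only mounted, writable drives gives NetWorker no phantom device to queue save sets against.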
On 21/07/2006, at 7:31 AM, George Sinclair wrote:
Hi,
Why is it that if the total number of save sets for a group exceeds the
value of the drive target sessions (in our case 4), and you only have
one writable tape, NetWorker will wait until the first 4 save sets have
completed before sending the next pending ones to tape?

NetWorker does ask for a second writable tape, and yes, if I had one
available, it would mount it, and then it would be writing 4 save sets
to drive 1 and 4 to drive 2. But I don't understand why it can't start
sending previously pending save sets to drive 1 as soon as one of the
running save sets completes. Instead, it seems to wait until all 4 are
complete; it then pauses for a minute and continues with the next 4, and
so on. Is this normal behavior? It would be nice if it could send work
there as soon as something frees up and not wait.

We have 7.2.2 running on a Solaris test server, using a Linux storage
node. The tape library is on the storage node. It has 4 SDLT 600 drives,
each set to 4 target sessions, but I currently have only one writable
tape. The parallelism for the clients is set to the default of 4, and
the server's parallelism is 20. I have a group of 8 clients and a total
of 10 save sets for this test.
Thanks.
George
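The cost of the behavior George describes can be sketched with a small simulation (my own illustration, not NetWorker code, and the save-set durations are invented): with one mounted volume capped at 4 target sessions and 10 queued save sets, wave-at-a-time scheduling drains the queue in ceil(10/4) = 3 waves, each lasting as long as its slowest member, whereas backfilling a session as soon as one frees up finishes sooner.

```python
import heapq
import math

def batch_makespan(durations, slots):
    # Observed behavior: dispatch 'slots' save sets, wait for the
    # whole wave to finish, then dispatch the next wave.
    total = 0.0
    for i in range(0, len(durations), slots):
        total += max(durations[i:i + slots])
    return total

def streaming_makespan(durations, slots):
    # Desired behavior: refill a session as soon as a save set completes.
    finish = [0.0] * slots  # finish time of each of the drive's sessions
    for d in durations:
        start = heapq.heappop(finish)  # earliest-free session
        heapq.heappush(finish, start + d)
    return max(finish)

if __name__ == "__main__":
    durations = [30, 10, 20, 40, 25, 15, 35, 5, 45, 30]  # minutes, hypothetical
    print(f"{math.ceil(len(durations) / 4)} waves with batch scheduling")
    print(f"batch:     {batch_makespan(durations, 4):.0f} min")
    print(f"streaming: {streaming_makespan(durations, 4):.0f} min")
```

With these made-up durations the batch style takes 40 + 35 + 45 = 120 minutes while the streaming style finishes in 80, which is why the "wait for all 4" pause is so noticeable.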
To sign off this list, send email to listserv AT listserv.temple DOT edu
and type "signoff networker" in the body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER