Subject: Re: [Networker] Question on target sessions?
From: Tim Nicholson <tim AT MAIL.USYD.EDU DOT AU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 21 Jul 2006 10:24:44 +1000
I have not played with it, but in later versions of NetWorker
there is also a "max sessions" attribute of a device.

The general idea is that NetWorker will try to use
"target sessions" of a device.  If they are all used,
then it will try to find another device and fill it.
If there are none left, it will overload the ones in
use up to "max sessions".

The group will try to start as many sessions as it is allowed
by the group parallelism, client parallelism, server parallelism
and max device sessions available.
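That ceiling works out to a simple minimum over the various limits. A tiny sketch (the function name and argument list are illustrative only, not any NetWorker API):

```python
# Hedged sketch: the number of sessions a group can start is bounded by
# the most restrictive of several settings (illustrative names only).
def effective_sessions(save_sets, group_parallelism, client_parallelism,
                       server_parallelism, device_sessions_available):
    return min(save_sets, group_parallelism, client_parallelism,
               server_parallelism, device_sessions_available)

# 40 save sets, group parallelism 12, plenty of client/server headroom,
# 3 devices x 4 target sessions = 12 device slots available:
print(effective_sessions(40, 12, 32, 20, 12))  # prints 12
```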


If you do have 40 save sets to run and your parallelism allows
12 sessions, then you will get 12 sessions.  It will allocate
target sessions (4) to the first device, target sessions (4)
to the second device and target sessions (4) to the third
device.  If one of them is unavailable, it will spread the last 4
sessions evenly over the first two devices.  If two are missing, it
will put all 12 on the one device.
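That fill-then-overflow behaviour can be modelled with a small simulation. This is only a sketch of the rule as described above, not actual NetWorker code; the device names and the max-sessions figure are made up:

```python
def allocate(sessions, devices, target, max_sessions):
    """Distribute sessions over devices: fill each device up to
    `target` first, then overflow evenly up to `max_sessions`."""
    load = {d: 0 for d in devices}
    for _ in range(sessions):
        # Prefer a device still under its target sessions ...
        under = [d for d in devices if load[d] < target]
        # ... otherwise overload devices up to their max sessions.
        pool = under or [d for d in devices if load[d] < max_sessions]
        if not pool:
            break  # everything saturated; remaining sessions wait
        least_busy = min(pool, key=lambda d: load[d])
        load[least_busy] += 1
    return load

# 12 sessions, 3 drives at target 4: a steady 4 per drive.
print(allocate(12, ["drive1", "drive2", "drive3"], 4, 8))
# One drive missing: the last 4 spread over the remaining two.
print(allocate(12, ["drive1", "drive2"], 4, 8))
# Two drives missing: all 12 pile onto the one device.
print(allocate(12, ["drive1"], 4, 12))
```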

On 21/07/2006, at 9:47 AM, George Sinclair wrote:

Thanks, Tim. I played around with disabling the other drives and/or
simply un-selecting them from the given pool resource. Either of these
seems to accomplish the same thing, wherein NetWorker will send however
many save sets it wants to the device (the one tape) and not pay
attention to the target sessions value of 4.

I did try changing the group parallelism to 4 with all the drives
enabled and all the devices selected for the pool, and this works
nicely. NetWorker maintains a steady stream of 4 save sets rather than
doing 4, completing, and then starting the next 4, etc. like it does if
I have the group parallelism set to the default of 0.

This brings up a question here. Let's suppose that I had say 10 clients
in a group, with anywhere from 3-4 save sets each, for a total of say
30-40 save sets. If I expect to have 3 tapes always available, and I
want to limit the target sessions to about 4 (I like to keep it at 4-5
max to decrease recover time) on the devices, then that's 3*4=12, so if
I make my group parallelism 12 then would that keep a stream of 4 per
device? What if I only had one tape available? Would it try to send 12
sessions to that one device?
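Making the arithmetic in that question explicit (illustrative numbers; whether NetWorker really piles all 12 streams onto one drive would depend on the device's "max sessions" attribute, where supported):

```python
target_sessions = 4
tapes_expected = 3
group_parallelism = tapes_expected * target_sessions  # 3 * 4 = 12
print(group_parallelism)  # 12 -> a steady 4 streams per drive

# If only one tape is mounted, nothing in this arithmetic prevents all
# 12 streams from landing on that single drive; only a per-device cap
# such as "max sessions" would hold it back.
print(group_parallelism // 1)  # 12 sessions on the lone drive
```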

I just want to keep a stream of x sessions going to every device that's
available, where x is the target sessions on the device (typically 4-5),
but I don't want NetWorker to exceed that too much if it only has one
drive available.

Thanks.

George


Tim Nicholson wrote:
I have not had this particular one happen, but something similar.

It seems that
    if you have another device available for the pool, even if
    that device is offline,
    and you have used up the target sessions for all other devices
        currently in use for that pool
    then NetWorker queues save sets for a volume, which it expects
    would be mounted on a new device.

This would explain your situation (and some others).

Our solution was to only put the devices that were really
available in the "devices" attribute of each pool's resource.

In your case you could also just change the "group parallelism" to 4.



On 21/07/2006, at 7:31 AM, George Sinclair wrote:

Hi,

Why is it that if the total number of save sets for a group exceeds
the value of the drive target sessions (in our case 4), and you only
have one writable tape, NetWorker will wait until the first 4 save sets have
completed before sending the next pending ones to tape?

NetWorker does ask for a second writable tape, and yes, if I had one
available, it would mount it, and then it would be writing 4 save sets
to drive 1 and 4 to drive 2. But I don't understand why it can't start
sending save sets that were previously pending to drive 1 as soon as one
of the running save sets completes. Instead, it seems to want to wait
until all 4 are complete; it then pauses for a minute and then continues
with the next 4, and so on. Is this normal behavior? It would be nice if
it could send stuff there as soon as something frees up and not wait.

We have 7.2.2 running on a Solaris test server, using a Linux storage
node. The tape library is on the storage node. It has 4 SDLT 600 drives,
each set to 4 target sessions, but I currently have only one writable
tape. The parallelism for the clients is set to the default of 4, and
the server's parallelism is 20. I have a group of 8 clients and a total of
10 save sets for this test.

Thanks.

George

To sign off this list, send email to listserv AT listserv.temple DOT edu and
type "signoff networker" in the
body of the email. Please write to
networker-request AT listserv.temple DOT edu if you have any problems
with this list. You can access the archives at
http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER
