2004-11-05 16:27:17
Subject: Re: [Networker] "Parallelism" setting if client is part of two groups
From: "Ballinger, John M" <john.ballinger AT PNL DOT GOV>
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Date: Fri, 5 Nov 2004 12:25:03 -0800
Remember that, per the documentation, the "target sessions" number for devices is simply a 
suggestion.
So if server parallelism is set to 16 and you have 2 drives with a target of 4, the 1st 4 
savesets go to drive 1, the 2nd 4 savesets go to drive 2, the 3rd 4 savesets go to drive 1 
(now at 8) and the 4th 4 savesets go to drive 2 (now at 8).
The drive settings are only suggestions to NetWorker, not hard limits.
The system works as designed and the server doesn't override the devices - rather, 
the device settings are simply suggestions. (If you only had 3 savesets to do, 
NetWorker would put all three on the first tape drive and not even use the 2nd 
tape drive.)
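The fill-then-overflow pattern described above can be sketched in a few lines of Python. This is only an illustration of the allocation behaviour as described in this thread, not NetWorker's actual algorithm; the function name and signature are invented for the example.

```python
def assign_sessions(num_savesets, drives, target_sessions, server_parallelism):
    """Sketch of the allocation pattern: fill each drive in order up to its
    "target sessions" suggestion, then spread the overflow evenly across
    drives, capped only by server parallelism."""
    sessions = {d: 0 for d in drives}
    to_place = min(num_savesets, server_parallelism)
    for _ in range(to_place):
        under = [d for d in drives if sessions[d] < target_sessions]
        if under:
            # Fill drives in order until each reaches its target.
            drive = under[0]
        else:
            # Targets are suggestions, not hard limits: overflow goes
            # to the least-loaded drive.
            drive = min(drives, key=lambda d: sessions[d])
        sessions[drive] += 1
    return sessions

print(assign_sessions(16, ["drive1", "drive2"], 4, 16))
# {'drive1': 8, 'drive2': 8} - both targets exceeded, as John describes
print(assign_sessions(3, ["drive1", "drive2"], 4, 16))
# {'drive1': 3, 'drive2': 0} - drive2 never used, matching John's example
```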

thanks - John

-----Original Message-----
From: Legato NetWorker discussion
[mailto:NETWORKER AT LISTMAIL.TEMPLE DOT EDU]On Behalf Of Riaan Louwrens
Sent: Thursday, November 04, 2004 6:14 AM
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: Re: [Networker] "Parallelism" setting if client is part of two
groups


As a matter of interest: server parallelism overrides device parallelism - hence you 
might see a significantly higher number of savesets streaming to your devices.

(I have seen devices with a target of 4 receiving 16 sessions, as that is what the 
server parallelism was set to.) I am not sure if this behaviour has been fixed in 
version 7.x.

Regards,
Riaan

-----Original Message-----
From: Legato NetWorker discussion
[mailto:NETWORKER AT LISTMAIL.TEMPLE DOT EDU]On Behalf Of Itzik Meirson
Sent: Thursday, November 04, 2004 12:02 PM
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: Re: [Networker] "Parallelism" setting if client is part of two
groups


I would second Robert's observation.
Saveset SPAWNING is done on a per-savegroup basis, so each "savegrp" will
(potentially) spawn up to "client parallelism" savesets from each client.
SPAWNING does not mean all of them will be running (allocated a device), as
the total number of concurrently running savesets is governed by the
"server parallelism".
If you run N groups in parallel and all groups have at least "server
parallelism" savesets in their work lists, the total number of spawned
savesets will be N * "server parallelism", but only 1 * "server parallelism"
will be running concurrently. The rest of the savesets will be waiting
for device allocation - and while they wait, the "inactivity timeout"
is ticking!
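As a worked example of the arithmetic above (the numbers here are hypothetical, chosen only to illustrate the spawned-versus-running distinction, not taken from a real server):

```python
# Hypothetical configuration: 3 groups running in parallel, each with a
# work list of at least "server parallelism" savesets.
n_groups = 3
server_parallelism = 16

spawned = n_groups * server_parallelism  # sessions started by savegrp
running = server_parallelism             # sessions actually on a device
waiting = spawned - running              # ticking against inactivity timeout

print(spawned, running, waiting)
# 48 16 32
```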
Itzik

-----Original Message-----
From: Legato NetWorker discussion [mailto:NETWORKER AT LISTMAIL.TEMPLE DOT EDU]
On Behalf Of Robert Maiello
Sent: Thursday, November 04, 2004 04:00
To: NETWORKER AT LISTMAIL.TEMPLE DOT EDU
Subject: Re: [Networker] "Parallelism" setting if client is part of two
groups

I'm going to say 8, as that's what I've seen. I would imagine it must be
less efficient than running 1 group with a client parallelism of 8?


Robert Maiello
Pioneer Data Systems


On Tue, 2 Nov 2004 05:30:11 -0500, Thomas Staudenmaier
<thomas.staudenmaier AT ZKD.BWL DOT DE> wrote:

>Hello,
>I have a client that is part of two groups (G1, G2). In each group the
>client resource "parallelism" is set to 4.
>When G1 and G2 run at the same time, how many saves can run in parallel
>on the client, 4 or 8?
>
>Thanks for your help.
>Thomas
>
>--
>Note: To sign off this list, send a "signoff networker" command via 
>email to listserv AT listmail.temple DOT edu or visit the list's Web site at 
>http://listmail.temple.edu/archives/networker.html where you can also 
>view and post messages to the list. Questions regarding this list 
>should be sent to stan AT temple DOT edu 
>=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=


