Subject: Re: [Networker] parallelism of storage nodes
From: Davina Treiber <Davina.Treiber AT PEEVRO.CO DOT UK>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 15 Aug 2007 12:01:25 +0100
Mark Wragge wrote:
I am a little confused now as to how I can control how much work my storage nodes can do at one time. I have a library with 10 tape devices shared between my NetWorker server and two storage nodes. Does the parallelism setting in the NetWorker server properties control the workload for all servers sharing the library? The parallelism setting there is currently 40 (4 streams per tape device). The parallelism setting on the storage node clients is 4. Does this mean that a storage node can only write 4 streams even though it has access to 10 tape devices? If I direct 10 clients with a parallelism of 4 to this storage node, how many streams will run at one time during the backup: 40 or just 4?

40. The setting of 4 is the client parallelism for sessions coming from the storage node machine - not for sessions coming from other clients via the storage node.
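To put numbers on it (assuming your devices are left at the usual default of 4 target sessions each):

    10 clients x client parallelism 4 = up to 40 candidate sessions
    10 devices x 4 target sessions    = 40 sessions the devices will accept
    server parallelism 40             = overall ceiling, so nothing is queued

All 40 streams can run at once; the server parallelism of 40 is the ceiling that would start to queue sessions if you added more clients.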

There is another parallelism setting that can be useful: group parallelism. You could feasibly put all clients that use a given storage node in one group, and control their combined session count by setting parallelism on that group. A rough sketch follows.
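Assuming a group named sn1-clients and the standard nsradmin syntax (attribute names can vary a little between releases), it would look something like:

    # nsradmin -s yourserver
    nsradmin> . type: NSR group; name: sn1-clients
    nsradmin> update parallelism: 20
    nsradmin> print

That would cap the group at 20 concurrent sessions, regardless of how the individual client parallelism values add up.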

I have often thought that, to complete the set, EMC should introduce a parallelism setting for jukeboxes and storage nodes. The existing parallelism setting for jukeboxes is something completely different: it controls tape movements rather than sessions.

