Subject: Re: [Networker] Waiting for 1 writable volume ...
From: George Sinclair <George.Sinclair AT NOAA DOT GOV>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Fri, 8 Jul 2011 13:19:30 -0400
On 07/08/11 10:11, Goslin, Paul wrote:
I would increase the drive target sessions to at least 6 as 4 seems awfully 
low. That may be enough to stop the requests for additional volumes... It 
sounds like it's attempting to run more concurrent streams than your drives are 
configured for now (only 8), so it wants/expects another drive to handle the 
load.

Since you have no Group parallelism set, the parallelism setting of each 
client in the group factors into the total concurrent sessions a group will 
attempt to stream to your drives... I noticed when we upgraded once, all the 
client parallelism settings changed from a default of 4 to something like 12, 
and I had to go and set them back to 4 or less (depending on the client; some 
are old Alpha/VMS clients that can't handle more than 1 or 2 concurrent 
sessions).

Or you could set the group parallelism to 8 or less (to match the current 
drive settings: 4 target sessions times 2 drives) and see if that stops the 
additional volume requests...
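
To make Paul's point concrete, here is a back-of-the-envelope sketch (a toy 
model only, not NetWorker's actual scheduler; the client parallelism values 
below are invented for illustration):

# Toy model of the extra volume requests -- NOT NetWorker's real scheduler.
def extra_volumes_requested(client_parallelism, drives, target_sessions,
                            group_parallelism=0):
    """Estimate how many additional writable volumes the server will ask for."""
    demanded = sum(client_parallelism)        # streams the group will try to open
    if group_parallelism:                     # 0 means "no group-level cap"
        demanded = min(demanded, group_parallelism)
    capacity = drives * target_sessions       # sessions the mounted tapes prefer
    if demanded <= capacity:
        return 0
    shortfall = demanded - capacity
    return -(-shortfall // target_sessions)   # ceiling division

# Six clients whose parallelism crept up to 12, two drives at 4 target sessions:
print(extra_volumes_requested([12] * 6, drives=2, target_sessions=4))       # 16
# Same clients with the group capped at 8, as suggested above:
print(extra_volumes_requested([12] * 6, drives=2, target_sessions=4,
                              group_parallelism=8))                         # 0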

I bet that's exactly what's happening. I've seen it many times. The drive target sessions setting is not an absolute high-water mark: NetWorker can and will push more sessions at a drive than that if there are not enough available drives and/or tapes to handle the save streams that are trying to send data.

The things that really control or limit this are the client parallelism and possibly the group parallelism. Not sure about pool parallelism. And obviously the server parallelism is the main starting point. I don't think you can exceed that, but I can't recall how the number of storage nodes plays into it.
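
Roughly, the way I picture those limits stacking up (my own reading of it, not 
anything from the documentation; the numbers below are made up):

# Rough interpretation of the limit hierarchy -- not a documented algorithm.
def effective_sessions(server_parallelism, group_parallelism, client_parallelism):
    """Upper bound on the concurrent save sessions a group can reach."""
    streams = sum(client_parallelism)         # what the clients will try to open
    if group_parallelism:                     # 0 disables the group-level cap
        streams = min(streams, group_parallelism)
    return min(streams, server_parallelism)   # server parallelism is the ceiling

print(effective_sessions(16, 0, [12] * 6))    # 16 -- well past 2 drives * 4 sessions
print(effective_sessions(16, 8, [12] * 6))    # 8  -- fits the two mounted tapes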

Too bad they don't allow separate parallelism values for different client resources for the same client. As soon as you change it for one instance, it changes for all of them.

George


-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On 
Behalf Of Manel Rodero Blanquez
Sent: Friday, July 08, 2011 8:31 AM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] Waiting for 1 writable volume ...

Hi Goslin,

1 - Total server parallelism is 16
2 - We have 2 LTO-4 drives with target sessions = 4
3 - Group parallelism is 0
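
If I plug those numbers into the kind of check Paul described, the mismatch is 
easy to see (just arithmetic, nothing measured on the server):

# Arithmetic with the numbers above -- an assumption about what is going on.
server_parallelism = 16
drives, target_sessions = 2, 4
group_parallelism = 0                         # 0 = no group-level cap

drive_capacity = drives * target_sessions     # 8 sessions on the 2 mounted tapes
print(server_parallelism > drive_capacity)    # True -> room to ask for one more volume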

This version seems to have a somewhat strange behaviour...

Any more ideas about what to check for?

Thanks.

On 07/07/2011 19:49, Goslin, Paul wrote:
It could be a couple of things...

Look at:
1. Total server parallelism setting (ours is set to 64 with 4 LTO-4 drives, 8 
target sessions per drive).
2. Target sessions per drive (the upgrade may have caused this number to be 
reduced, so it thinks it needs more drives to accommodate the total concurrent 
sessions it is attempting to run).
3. Parallelism setting of the running groups... (do they add up to more than 
#2 above?)


-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU] On 
Behalf Of Manel Rodero Blanquez
Sent: Thursday, July 07, 2011 9:22 AM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker] Waiting for 1 writable volume ...

Hello,

Recently, our server always asks for 1 writable volume even when both
tape drives have a tape mounted and are being written to.

At the beginning of the backup it says "waiting for 3 writable volumes",
and after 2 tapes are mounted in the 2 drives, the request for 1 extra
volume remains in the alert window.

Is this a new behaviour in the 7.6.1 version? (We've recently upgraded
from 7.5.3, where we hadn't seen this message during the backups, only
at the beginning while Legato mounts the needed tapes.)

Thank you.




--
George Sinclair
Voice: (301) 713-3284 x210
- The preceding message is personal and does not reflect any official or unofficial position of the United States Department of Commerce -
- Any opinions expressed in this message are NOT those of the US Govt. -

To sign off this list, send email to listserv AT listserv.temple DOT edu and type 
"signoff networker" in the body of the email. Please write to networker-request 
AT listserv.temple DOT edu if you have any problems with this list. You can access the 
archives at http://listserv.temple.edu/archives/networker.html or
via RSS at http://listserv.temple.edu/cgi-bin/wa?RSS&L=NETWORKER