Subject: Re: [Networker] Waiting for 1 writable volume ...
From: Manel Rodero Blanquez <manel AT FIB.UPC DOT EDU>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Tue, 12 Jul 2011 10:08:57 +0200
Hi George,

On 08/07/2011 19:19, George Sinclair wrote:
On 07/08/11 10:11, Goslin, Paul wrote:
I would increase the drive target sessions to at least 6 as 4 seems
awfully low. That may be enough to stop the requests for additional
volumes... It sounds like it's attempting to run more concurrent
streams than your drives are configured for now (only 8), so it
wants/expects another drive to handle the load.

Since you have no Group parallelism set, the parallelism setting of
each client in the group factors into the total concurrent sessions a
group will attempt to stream to your drives... I noticed that when we
upgraded once, all the client parallelism values changed from a default
of 4 to something like 12, and I had to go and set them back to 4 or
less (depending on the client; some are old Alpha/VMS clients that
can't handle more than 1 or 2 concurrent sessions).

Or you could set the group parallelism to 8 or less (to match the
current drive settings of 4 times 2 drives) and see if that stops the
additional volume requests...

I bet that's exactly what's happening. I've seen it many times. The
drive parallelism is not an absolute high water mark. NW can and will
increase that if there are not enough available drives, and/or tapes, to
handle the save streams that are trying to send data.

The one thing that does control or limit this is the client parallelism
and possibly the group parallelism. Not sure about pool parallelism. And
obviously the server parallelism is the main starting point. I don't
think you can exceed that but can't recall how the number of snodes
plays into that.


So, right now, I have server parallelism set to 16. Drive target sessions have been increased from 4 to 6, as Goslin suggested. Group parallelism is set to 0, and all clients have parallelism set to 4 (well, some clients have it set to 1 because they are only a virtual name for a cluster share and have only 1 saveset defined).
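To sanity-check numbers like these, the rule of thumb from this thread can be sketched as a toy model (this is NOT how NetWorker's scheduler actually works, and the client list below is hypothetical): the sessions a backup can start are capped by server, group, and client parallelism, and extra volume/drive requests appear when that demand exceeds drives times target sessions.

```python
# Toy model of the sizing rule discussed in this thread -- not
# NetWorker's real scheduler, just the arithmetic the posters are doing.

def expected_extra_volume_requests(server_parallelism,
                                   client_parallelisms,
                                   drives,
                                   target_sessions,
                                   group_parallelism=0):
    """How many extra drives/volumes would be requested under the
    simplified rule: demand is capped by server, group (0 = unlimited)
    and client parallelism; capacity is drives * target sessions."""
    demand = min(server_parallelism, sum(client_parallelisms))
    if group_parallelism:  # 0 means "no group limit"
        demand = min(demand, group_parallelism)
    capacity = drives * target_sessions
    overflow = max(0, demand - capacity)
    return -(-overflow // target_sessions)  # ceiling division

# Roughly the setup described above: server parallelism 16, two LTO4
# drives now at 6 target sessions each, a handful of clients at 4 and
# a couple of cluster aliases at 1 (client list is made up).
print(expected_extra_volume_requests(16, [4, 4, 4, 4, 1, 1], 2, 6))  # -> 1
```

Under this crude model, a demand of 16 sessions still exceeds the 12 the two drives accept, which is consistent with Goslin's alternative suggestion of capping group parallelism at the drives' combined target sessions.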

I hope this configuration is good, as I have no free time right now to re-read the admin guide ;-)

Thank you very much for your help.

Too bad they don't allow separate parallelism values for different
client resources for the same client. As soon as you change it for one
instance, it changes for all of them.

George


-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
On Behalf Of Manel Rodero Blanquez
Sent: Friday, July 08, 2011 8:31 AM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: Re: [Networker] Waiting for 1 writable volume ...

Hi Goslin,

1 - Total Server parallelism is 16
2 - We have 2 LTO4 drives with target sessions = 4
3 - Group parallelism is 0

This version seems to have somewhat strange behaviour...

Any more ideas about what to check for?

Thanks.

On 07/07/2011 19:49, Goslin, Paul wrote:
It could be couple of things...

Look at:
1. Total server parallelism setting (ours is set to 64 with 4 LTO-4
drives, 8 target sessions per drive).
2. Target sessions per drive (the upgrade may have caused this number
to be reduced, so it thinks it needs more drives to accommodate the
total concurrent sessions it is attempting to run).
3. Parallelism setting of the running groups (do they add up to more
than #2 above?)
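Check #3 above boils down to a single comparison, sketched here (the group parallelism values are invented for illustration; they are not from the poster's setup):

```python
# Minimal sketch of check #3: do the running groups' parallelism
# settings add up to more sessions than the drives are configured to
# accept at their target-session setting?

def drives_overcommitted(group_parallelisms, drives, target_sessions):
    return sum(group_parallelisms) > drives * target_sessions

# Goslin's drive setup (4 LTO-4 drives, 8 target sessions each) with
# two hypothetical groups running concurrently at parallelism 16 and 20.
print(drives_overcommitted([16, 20], drives=4, target_sessions=8))  # -> True
```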


-----Original Message-----
From: EMC NetWorker discussion [mailto:NETWORKER AT LISTSERV.TEMPLE DOT EDU]
On Behalf Of Manel Rodero Blanquez
Sent: Thursday, July 07, 2011 9:22 AM
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Subject: [Networker] Waiting for 1 writable volume ...

Hello,

Recently, our server always asks for 1 more writable volume even when
both tape drives have a tape inside and are being written to.

At the beginning of the backup it says "waiting for 3 writable volumes",
and after 2 tapes are mounted in the 2 drives, the request for 1 extra
volume remains in the alert window.

Is this a new behaviour in the 7.6.1 version? (We recently upgraded
from 7.5.3, where we never saw this message during the backups, only
at the beginning while Legato mounted the needed tapes.)

Thank you.





--

       Manel Rodero Blánquez
o o o  IT Systems Manager
o o o  Laboratori de Càlcul
o o o  Facultat d'Informàtica de Barcelona
U P C  Universitat Politècnica de Catalunya - Barcelona Tech

       E-mail : manel AT fib.upc DOT edu
       Tel.   : +34 93 401 0847
       Web    : http://www.fib.upc.edu/

