Subject: Re: [NV-L] Netview Status Request
From: "Marcelo Zacchi" <mzacchi AT gmail DOT com>
To: "Tivoli NetView Discussions" <nv-l AT lists.ca.ibm DOT com>
Date: Tue, 26 Sep 2006 14:45:16 -0300
Leslie,
 
Apparently it worked! Thanks again!
The only problem now is that during every polling sequence, while NetView is updating the map, the netview.exe process uses 100% of the processor, and that causes a lot of instability.
Do you know if there is any connection between this and the nvsync timeout?
 
Best regards,
Marcelo

 
On 9/26/06, Marcelo Zacchi <mzacchi AT gmail DOT com> wrote:
Leslie,
 
Thanks for your reply. I have changed the nvsync_timeout value in the netview.rls and am now waiting to see if it helps.
Since the netview.rls is the first ruleset to run, I don't have to change anything else.
 
Best regards,
Marcelo

 
On 9/26/06, Leslie Clark <lclark AT us.ibm DOT com> wrote:

The timeouts and retries are set in xnmsnmpconf (Options..SNMP). The defaults are 2 seconds and 3 retries. These are settings you should increase only in very small increments.
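
Keep in mind that, if I remember correctly, netmon doubles the timeout on each successive retry (confirm this for your release). With the defaults that already means 2 + 4 + 8 + 16 = 30 seconds of waiting before an interface is declared down; raise the base timeout to just 3 seconds and you are at 3 + 6 + 12 + 24 = 45 seconds per down interface, every polling cycle, which is why you want small increments.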

I can recommend the netview.rls rule on the TEC. It gives you another layer of timeout: TEC will clear any down event for which an up event arrives within a reasonable time. If you run that rule before your ticketing rule, false alarms still appear, but they usually get cleared out before a ticket gets created.

In the netview.rls on the TEC there is a parameter that you usually need to increase:

% nvsync_timeout
% This attribute sets the period in seconds that we must
% wait to distinguish between the synchronization of
% single or multiple events. Default timeout is 30 seconds.
rerecord(nvsync_timeout, 30),

I suggest that you set this to one polling period plus a couple of seconds.
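
For example, if your status polling interval is 5 minutes (300 seconds), the entry above would become something like this (the 305 is only an illustration; substitute your own polling period plus a couple of seconds):

% Assuming a 5-minute (300-second) status polling interval;
% adjust to match your environment.
rerecord(nvsync_timeout, 305),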

Cordially,

Leslie A. Clark
IT Services Specialist, Network Mgmt
Information Technology Services Americas
IBM Global Services
(248) 552-4968 Voicemail, Fax, Pager



"Marcelo Zacchi" <mzacchi AT gmail DOT com>
Sent by: nv-l-bounces AT lists.ca.ibm DOT com

09/26/2006 06:51 AM
Please respond to
Tivoli NetView Discussions <nv-l AT lists.ca.ibm DOT com >

To
nv-l AT lists.ca.ibm DOT com
cc
Subject
[NV-L] Netview Status Request





Dear list members,
 
I've been having a lot of stress in my environment due to oscillations in object status in NetView.
We have created an adapter to open TroubleTickets for each NODE DOWN event that arrives at TEC, and NetView is generating lots of such events every day!
I have tried adjusting the timeout values in both the ICMP and SNMP configurations, but nothing changes and the map keeps going crazy.
 
I noticed today, in the netmon.trace file, that netmon seems to be attempting to poll the object only once:
 
26/09/2006 07:14:20: expired ping to 22.22.22.254 (R-DMZNEG) seqnum = 14548 ident = 13756
26/09/2006 07:14:20:!!! timing out iface 22.22.22.254 with seqnum=14548
26/09/2006 07:14:20:reachabilityAnalysisExpiredPing: interface 22.22.22.254, (was Normal) subnet 22.22.1, mode Disabled
26/09/2006 07:14:20:DOWN event: 22.22.22.254 (R-DMZNEG)
 
Does anyone know how I can actually change the number of times NetView tries to check an object's status?
 
Thanks in advance,
Marcelo Zacchi


_______________________________________________
NV-L mailing list
NV-L AT lists.ca.ibm DOT com
Unsubscribe:NV-L-leave AT lists.ca.ibm DOT com
http://lists.ca.ibm.com/mailman/listinfo/nv-l (Browser access limited to internal IBM'ers only)
<Prev in Thread] Current Thread [Next in Thread>