RE: [nv-l] NV tuning for Data collection

2005-01-13 09:58:34
Subject: RE: [nv-l] NV tuning for Data collection
From: "Liu, David" <david.liu AT eds DOT com>
To: "'nv-l AT lists.us.ibm DOT com'" <nv-l AT lists.us.ibm DOT com>
Date: Thu, 13 Jan 2005 14:55:57 -0000
James,
 
Thanks for your advice. I'll check the priority of the Cisco devices.
 
The two MIBs I got error messages for are SNMPv2 MIBs, which I cannot browse
with the MIB browser.
 
Regards,
David

-----Original Message-----
From: owner-nv-l AT lists.us.ibm DOT com [mailto:owner-nv-l AT lists.us.ibm DOT com] On
Behalf Of James Shanks
Sent: Thursday, January 13, 2005 3:47 PM
To: nv-l AT lists.us.ibm DOT com
Subject: RE: [nv-l] NV tuning for Data collection



You need to check the MIB you have loaded for this. snmpCollect "expects" to
get back what's in the SNMPv1 MIB database. Start the MIB browser, xnmbrowser,
navigate down to the appropriate entries, and click the Describe button to
see what data type each one is.

It is not likely that this has anything to do with the delays. It is far
more likely that your Cisco devices are configured to give a low priority to
SNMP requests when they are busy.


James Shanks
Level 3 Support for Tivoli NetView for UNIX and Windows
Tivoli Software / IBM Software Group
"Liu, David" <david.liu AT eds DOT com>
Sent by: owner-nv-l AT lists.us.ibm DOT com
01/13/2005 09:17 AM
Please respond to nv-l

To: "'nv-l AT lists.us.ibm DOT com'" <nv-l AT lists.us.ibm DOT com>
Subject: RE: [nv-l] NV tuning for Data collection

Joe and Leslie,

Thanks for your advice. I'll do some further investigation and testing,
especially to avoid collecting those "non-reply" and "non-response" objects.


One other "problem" appearing in the trace file:

"MIB cpmCPUTotal5min, (and ciscoMemoryPoolFree) on 'hostname' gave type
Gauge, expect TIMESTICKS (due to mib.coerce file?)"

1) I did not specify anything in the coerce file, and I checked at the Cisco
site that CPU and memory should be type Gauge (Gauge32). Why is TIMESTICKS
expected?

2) Does this error have an impact on the quality of the data collection
(e.g. delays)?

Regards,
David 

-----Original Message-----
From: owner-nv-l AT lists.us.ibm DOT com [mailto:owner-nv-l AT lists.us.ibm DOT com] On
Behalf Of Leslie Clark
Sent: Wednesday, January 12, 2005 11:32 PM
To: nv-l AT lists.us.ibm DOT com
Subject: Re: [nv-l] NV tuning for Data collection


Startup of snmpCollect may be slow. Turn on the tracing and watch what it is
doing at startup, so you know whether to worry or not. 

I've found snmpCollect to be pretty efficient at collecting, and at
minimizing its impact on the devices by grouping requests together. Where you
may have trouble is with those per-interface values on devices with lots of
interfaces. There is a parameter for the snmpCollect daemon, maxpdu, that
controls how much data it will request at once. It defaults to 100
somethings; I've sometimes changed it to 50 so that snmpCollect breaks the
request into smaller packages. This avoids loss of data caused by the device
refusing to deliver too-large responses.

There are also, under Options... SNMP Configuration, the timeout and retries
settings. I know these apply to SNMP requests from other parts of NetView,
but I have never been sure whether they apply to snmpCollect or not.

You will also have trouble with some of those interface counters if the
interfaces are very high-speed. NetView currently will only collect 32-bit
values (Counter32), and for gig interfaces the values wrap much too
quickly. So take a look at which instances you really need, and what the
rate of flow really is. There is no point collecting it if it is bad data.
Look for sub-interface instances with lower rates of flow and see if that
will give you what you need.
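The arithmetic behind that warning can be made concrete: a 32-bit octet
counter holds 2^32 bytes, so at a sustained line rate of R bits per second it
wraps in (2^32 * 8) / R seconds. A minimal sketch (the rates are just
illustrations):

```shell
#!/bin/sh
# Seconds for a 32-bit octet counter such as ifInOctets to wrap:
# (2^32 bytes * 8 bits/byte) / line rate in bits per second.
wrap_seconds() {
    awk -v bps="$1" 'BEGIN { printf "%.0f\n", (4294967296 * 8) / bps }'
}

wrap_seconds 1000000000   # 1 Gb/s   -> 34 seconds
wrap_seconds 100000000    # 100 Mb/s -> 344 seconds (under 6 minutes)
wrap_seconds 10000000     # 10 Mb/s  -> 3436 seconds (about 57 minutes)
```

At a 15-minute (900-second) collection interval, even a saturated 100 Mb/s
link wraps the counter more than twice between polls, so the computed deltas
are meaningless.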

When you update the snmpCollect configuration via the GUI, it updates
/usr/OV/conf/snmpCol.conf and stops and restarts snmpCollect. You can also
edit that file manually. If you find that you want to collect a variety of
different interface instances on each device, you could generate that file
programmatically. I'm suggesting that with large numbers of devices,
entering *.*.*.* or Routers just because it is easy is not always the best
choice. Try making fancy entries via the GUI and then check the results in
snmpCol.conf. Then write a little script to generate the repetitive parts.
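Such a generator script might look like the sketch below. The TEMPLATE line
is purely hypothetical: the real snmpCol.conf field layout is not shown here,
so copy an exact line the GUI writes and substitute only the hostname.

```shell
#!/bin/sh
# Sketch: generate repetitive per-device collection entries.
# TEMPLATE is a hypothetical placeholder -- replace it with a real line
# copied from /usr/OV/conf/snmpCol.conf as written by the GUI, keeping
# %s where the hostname belongs.
TEMPLATE='cpmCPUTotal5min %s'

# Device list, one hostname per line (normally a file you maintain).
printf 'router1\nrouter2\nrouter3\n' > cisco-routers.txt

while read host; do
    printf "$TEMPLATE\n" "$host"
done < cisco-routers.txt > snmpCol.generated

cat snmpCol.generated
```

Review snmpCol.generated by hand before merging it into snmpCol.conf, then
stop and restart snmpCollect just as the GUI would.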

Cordially,

Leslie A. Clark
IBM Global Services - Systems Mgmt & Networking



"Liu, David" <david.liu AT eds DOT com>
Sent by: owner-nv-l AT lists.us.ibm DOT com
01/11/2005 03:53 AM
Please respond to nv-l

To: "'nv-l AT lists.tivoli DOT com'" <nv-l AT lists.tivoli DOT com>
Subject: [nv-l] NV tuning for Data collection

Hi list,

I've been reading the nv-l archives for quite some time and have benefited
from them. Now I'm posting my first question to get your advice.

We are collecting quite a lot of data, supposedly every 15 minutes, but so
far only part of the collection actually happens (each day fewer than half,
i.e. about 40 collections per definition per device, sometimes even none). My
basic question is: can NetView handle that many collections? When I suspended
more than half of the collections (the interface data), it seemed to work
fine. If it can, how can I tune NetView, and where can I find documentation
for tuning the snmpCollect daemon settings?

Here's some basic info.

NV 7.1.2 on Solaris 2.8.

Data collection on about 600 devices:

1) SysUptime for all of them

2) cpmCPUTotal5min for 400 devices (cisco)

3) ciscoMemoryPoolUsed for 400 devices

4) ciscoMemoryPoolFree for 400 devices

5) ifInUcastPkts, ifOutUcastPkts, ifInErrors, ifOutErrors, ifAdminStatus,
ifOperStatus, ifLastChange, ifInOctets, ifOutOctets, ifInDiscards,
ifOutDiscards, ifInNUcastPkts, ifOutNUcastPkts for about 200 devices with
lot of interfaces

6) Some latency data for 10 routers
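As a rough sanity check on the volume, the list above works out to the
following number of variables per 15-minute cycle; the 20-interfaces-per-
device average is an assumption for illustration only:

```shell
#!/bin/sh
# Back-of-envelope polling load for the collections listed above.
awk 'BEGIN {
    vars  = 600             # sysUpTime on all devices
    vars += 400 * 3         # cpmCPUTotal5min + two memory-pool objects
    vars += 200 * 13 * 20   # 13 ifTable counters, ~20 interfaces/device (assumed)
    vars += 10              # latency objects on 10 routers
    printf "variables per 15-min cycle: %d\n", vars    # -> 53810
    printf "variables per second:       %.1f\n", vars / 900
}'
```

Nearly 54,000 variables per cycle, i.e. roughly 60 per second sustained, the
overwhelming majority of them interface counters, which is consistent with
the observation that suspending the interface collections made the rest work.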

Our current snmpCollect (daemon) settings:

Defer time: 60
Max PDU: 50
Config check interval: 1440
Max concurrent SNMP sessions: 50
Verbose trace mode: Yes
Polling interval for nvcold: 60

Thank you in advance.

Regards,
David




