QNAP Discovery crash

Hi, for weeks now the Checkmk discovery of a QNAP NAS has been crashing.
I also submitted the crash report weeks ago.
Updating has not resolved it either.
We have the Linux agent on the NAS and additionally poll it via SNMP.
All sensors are working except this one.

QNAP TS-1273au-RP
Firmware: 4.5.4.1800
CheckMK Server+Agent: 2.0.0p11

Error:

IndexError (list index out of range)

Discovery crashes are most often related to arbitrary data.
Why are you monitoring the QNAP with both agent and SNMP?
Are you using any local checks or custom plugins?

Because I cannot monitor everything I want with only SNMP or only the agent.
I use the defaults, no local checks or custom plugins.

What does the error message look like when you run “cmk --debug -vvII QNAP”?

Hi, here is the output, only the last lines including the error:

Write data to cache file /omd/sites/STAR/tmp/check_mk/data_source_cache/snmp/discovery/sys-nas01
Trying to acquire lock on /omd/sites/STAR/tmp/check_mk/data_source_cache/snmp/discovery/sys-nas01
Got lock on /omd/sites/STAR/tmp/check_mk/data_source_cache/snmp/discovery/sys-nas01
Releasing lock on /omd/sites/STAR/tmp/check_mk/data_source_cache/snmp/discovery/sys-nas01
Released lock on /omd/sites/STAR/tmp/check_mk/data_source_cache/snmp/discovery/sys-nas01
[cpu_tracking] Stop [7fb866254a60 - Snapshot(process=posix.times_result(user=0.1100000000000001, system=0.01999999999999999, children_user=0.0, children_system=0.0, elapsed=0.1600000001490116))]
  Source: SourceType.HOST/FetcherType.PIGGYBACK
[cpu_tracking] Start [7fb8662543d0]
No piggyback files for 'sys-nas01'. Skip processing.
No piggyback files for '192.168.253.40'. Skip processing.
[PiggybackFetcher] Fetch with cache settings: NoCache(base_path=PosixPath('/omd/sites/STAR/tmp/check_mk/data_source_cache/piggyback/sys-nas01'), max_age=MaxAge(checking=0, discovery=120, inventory=120), disabled=False, use_outdated=False, simulation=False)
[PiggybackFetcher] Execute data source
[cpu_tracking] Stop [7fb8662543d0 - Snapshot(process=posix.times_result(user=0.0, system=0.0, children_user=0.0, children_system=0.0, elapsed=0.0))]
+ PARSE FETCHER RESULTS
  Source: SourceType.HOST/FetcherType.PROGRAM
No persisted sections loaded
  -> Add sections: ['check_mk', 'cpu', 'df', 'diskstat', 'drbd', 'kernel', 'lnx_bonding', 'lnx_if', 'md', 'mem', 'mounts', 'multipath', 'postfix_mailq', 'ps_lnx', 'uptime', 'vbox_guest']
  Source: SourceType.MANAGEMENT/FetcherType.SNMP
No persisted sections loaded
  -> Add sections: ['dell_om_disks', 'dell_om_esmlog', 'dell_om_fans', 'dell_om_mem', 'dell_om_power', 'dell_om_processors', 'dell_om_sensors', 'hr_cpu', 'hr_fs', 'hr_ps', 'if', 'inv_if', 'qnap_disks', 'qnap_fans', 'qnap_hdd_temp', 'snmp_extended_info', 'snmp_info', 'snmp_os', 'snmp_uptime', 'ucd_cpu_load', 'ucd_diskio', 'ucd_mem']
  Source: SourceType.HOST/FetcherType.SNMP
No persisted sections loaded
  -> Add sections: ['dell_om_disks', 'dell_om_esmlog', 'dell_om_fans', 'dell_om_mem', 'dell_om_power', 'dell_om_processors', 'dell_om_sensors', 'hr_cpu', 'hr_fs', 'hr_ps', 'if', 'inv_if', 'qnap_disks', 'qnap_fans', 'qnap_hdd_temp', 'snmp_extended_info', 'snmp_info', 'snmp_os', 'snmp_uptime', 'ucd_cpu_load', 'ucd_diskio', 'ucd_mem']
  Source: SourceType.HOST/FetcherType.PIGGYBACK
No persisted sections loaded
  -> Add sections: []
Received no piggyback data
+ EXECUTING HOST LABEL DISCOVERY
Trying host label discovery with: cpu, df, kernel, lnx_if, uptime, check_mk, diskstat, drbd, inv_if, lnx_bonding, md, mem, mounts, multipath, postfix_mailq, snmp_extended_info, ucd_mem, vbox_guest
  cmk/os_family: linux (check_mk)
Trying to acquire lock on /omd/sites/STAR/var/check_mk/crashes/base/5e99f270-566a-11ec-8156-005056925c2b/crash.info
Got lock on /omd/sites/STAR/var/check_mk/crashes/base/5e99f270-566a-11ec-8156-005056925c2b/crash.info
Releasing lock on /omd/sites/STAR/var/check_mk/crashes/base/5e99f270-566a-11ec-8156-005056925c2b/crash.info
Released lock on /omd/sites/STAR/var/check_mk/crashes/base/5e99f270-566a-11ec-8156-005056925c2b/crash.info
Traceback (most recent call last):
  File "/omd/sites/STAR/bin/cmk", line 92, in <module>
    exit_status = modes.call(mode_name, mode_args, opts, args)
  File "/omd/sites/STAR/lib/python3/cmk/base/modes/__init__.py", line 69, in call
    return handler(*handler_args)
  File "/omd/sites/STAR/lib/python3/cmk/base/modes/check_mk.py", line 1542, in mode_discover
    discovery.do_discovery(
  File "/omd/sites/STAR/lib/python3/cmk/base/discovery.py", line 379, in do_discovery
    _do_discovery_for(
  File "/omd/sites/STAR/lib/python3/cmk/base/discovery.py", line 436, in _do_discovery_for
    discovered_services, host_label_discovery_result = _discover_host_labels_and_services(
  File "/omd/sites/STAR/lib/python3/cmk/base/discovery.py", line 1408, in _discover_host_labels_and_services
    discovered_host_labels = _discover_host_labels(
  File "/omd/sites/STAR/lib/python3/cmk/base/discovery.py", line 1227, in _discover_host_labels
    discovered_host_labels = _discover_host_labels_for_source_type(
  File "/omd/sites/STAR/lib/python3/cmk/base/discovery.py", line 1279, in _discover_host_labels_for_source_type
    for label in section_plugin.host_label_function(**kwargs):
  File "/omd/sites/STAR/lib/python3/cmk/base/api/agent_based/register/section_plugins.py", line 191, in filtered_generator
    for label in host_label_function(  # type: ignore[misc] # Bug: None not callable
  File "/omd/sites/STAR/lib/python3/cmk/base/plugins/agent_based/snmp_extended_info.py", line 42, in host_label_snmp_extended_info
    if device_type.name in section[0].entPhysDescr.upper():
IndexError: list index out of range

Did you submit the crash report? If not, please do so. That helps us a lot with troubleshooting if the issue affects several people.

Yes, I uploaded it some weeks ago.


I think I had a similar issue some weeks ago. The problem is the “snmp_extended_info” section: its host label discovery cannot handle missing SNMP data. The only workaround for such a device is to disable this SNMP section.
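The traceback above shows why: “host_label_snmp_extended_info” reads “section[0].entPhysDescr” without checking whether the parsed SNMP table is empty, so an empty section raises the IndexError. A minimal sketch of a defensive version follows; note that the “Entity” and “HostLabel” records and the device-type list are simplified stand-ins for illustration, not the actual Checkmk plugin code:

```python
from typing import Iterator, List, NamedTuple


class Entity(NamedTuple):
    """Simplified stand-in for one row of the snmp_extended_info table."""
    entPhysDescr: str


class HostLabel(NamedTuple):
    """Simplified stand-in for Checkmk's HostLabel object."""
    name: str
    value: str


# Illustrative device-type keywords only; the real plugin uses its own list.
DEVICE_TYPES = ("FIBRECHANNEL", "SWITCH")


def host_label_snmp_extended_info(section: List[Entity]) -> Iterator[HostLabel]:
    # Guard against an empty SNMP table: the crashing code accessed
    # section[0].entPhysDescr unconditionally and raised IndexError here.
    if not section:
        return
    descr = section[0].entPhysDescr.upper()
    for device_type in DEVICE_TYPES:
        if device_type in descr:
            yield HostLabel("cmk/device_type", device_type.lower())


# An empty section no longer crashes, it simply yields no labels:
print(list(host_label_snmp_extended_info([])))  # → []
```

With a guard like this, a device that returns no rows for the entity MIB would simply get no device-type label instead of aborting the whole discovery.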

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed. Contact an admin if you think this should be re-opened.