We have nodata triggers on a device uptime item (5m interval) for each SNMP device we monitor. Recently, a handful of these triggers have started firing, and closer inspection reveals that the items are failing to get data, sometimes for extended periods of time. See the graph attached below.
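For context, each trigger is just the standard nodata check. A minimal sketch of the expression, assuming a template-level uptime item keyed sysUpTime (the template and key names here are placeholders, not our exact ones):

```
{Template SNMP Device:sysUpTime.nodata(10m)}=1
```

The 10m window is illustrative: it gives the 5m item two missed polls before the trigger fires.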

I can snmpget this OID from the CLI (or even snmpwalk the target host) during one of these 'failure' periods without issue.
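For reference, the manual check is a plain get of the standard uptime OID, roughly like this (the address and v2c community string are placeholders for whatever the host actually uses):

```
snmpget -v2c -c public 192.0.2.10 SNMPv2-MIB::sysUpTime.0
# equivalently, by numeric OID:
snmpget -v2c -c public 192.0.2.10 .1.3.6.1.2.1.1.3.0
```

This returns immediately even while Zabbix is reporting nodata for the same host.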
Pollers are set to 150 and typically stay 10-30% utilised, infrequently peaking around 50%.
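For completeness, the poller-related excerpt of our zabbix_server.conf (the Timeout noted in the comment is the Zabbix default, mentioned only because it caps how long a poller waits on each SNMP reply):

```
# zabbix_server.conf (excerpt)
StartPollers=150
# Timeout defaults to 3 seconds if unset; a device that answers slowly can
# time out in the poller while still responding fine to a CLI snmpget.
```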
Network utilisation on the NIC is negligible.
Network connectivity of devices is excellent (gigabit [or more] everywhere).
Any ideas?
We have a single-box deployment on a dual-socket HP DL360 Gen9 server with all-SSD storage and 128GB of RAM.
* CentOS 7 (up to date).
* Zabbix 3.4.15
* net-snmp 5.7.2-28