We're just getting started on our initial Zabbix deployment, using this template, which includes event-log-based triggers: https://share.zabbix.com/operating-s...and-perfomance
It's collecting data, but the problem is that it's creating Zabbix problems for ancient event log entries, and those problems aren't resolving. Example trigger from the template:
logseverity(/servername/eventlog[DNS Server,,"Warning|Error"])>1 and nodata(/servername/eventlog[DNS Server,,"Warning|Error"],1800s)=0
If I'm understanding things correctly, that says the trigger should be resolved if there are no Warning or Error events in the DNS Server log for 30 minutes. We've got a server showing an active problem based on this trigger for over 24 hours now, and at this moment the most recent matching event in the log has a local timestamp of August 2020 but a Zabbix timestamp of 2 minutes ago. So I'm struggling to understand what exactly is happening here. Is it just parsing the event log exceedingly slowly, to the point of taking 24+ hours to work through the log? If so, what controls the rate at which it can read through the log?
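For reference, here's how I'm reading that expression, along with the agent-side settings I assume control how fast the event log gets chewed through (MaxLinesPerSecond in zabbix_agentd.conf and the item key's <maxlines>/<mode> parameters are my assumptions from reading the docs, not something I've confirmed this template actually uses):

# My reading of the trigger:
#   logseverity(...)>1      -> the most recently collected event is Warning or Error severity
#   nodata(...,1800s)=0     -> the item HAS received data within the last 30 minutes
# i.e. the problem should close once no matching event arrives for 30 minutes.
logseverity(/servername/eventlog[DNS Server,,"Warning|Error"])>1
and nodata(/servername/eventlog[DNS Server,,"Warning|Error"],1800s)=0

# Agent-side rate limit I assume applies (zabbix_agentd.conf); the default is 20 lines/second:
MaxLinesPerSecond=100

# The eventlog key also takes <maxlines> and <mode> parameters; mode=skip would ignore the
# historical backlog entirely (again my assumption, not what the template ships with):
eventlog[DNS Server,,"Warning|Error",,,100,skip]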