We have an issue where we get flooded with alerts.
The host is a NetScaler with SNMP items and triggers that monitor down hosts in the load balancer queues. The action is set up roughly as follows (sketched below):
- Alert on a trigger (host down > 0)
- Escalations enabled
- No recovery message
- Operation: send email, steps 1 - 0, every 300 seconds
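For reference, this is roughly what the action looks like when created through the Zabbix JSON-RPC API. This is only a minimal sketch, not our actual export: the URL, auth token, action name, media type ID, and user group ID are placeholders, exact field names vary between Zabbix versions (these are 2.x-era names), and the trigger conditions are omitted for brevity.

import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder URL
AUTH_TOKEN = "replace-with-session-token"                  # from a prior user.login call

# Trigger action matching the configuration described above:
# escalate every 300 s, steps 1 - 0 (repeat until resolved), no recovery message.
request = {
    "jsonrpc": "2.0",
    "method": "action.create",
    "params": {
        "name": "LB queue host down - email",    # placeholder action name
        "eventsource": 0,                        # 0 = trigger events
        "esc_period": 300,                       # escalation step duration, seconds
        "recovery_msg": 0,                       # no recovery message
        "operations": [
            {
                "operationtype": 0,              # 0 = send message
                "esc_step_from": 1,
                "esc_step_to": 0,                # 0 = keep repeating every step
                "opmessage": {"default_msg": 1, "mediatypeid": "1"},  # placeholder media type
                "opmessage_grp": [{"usrgrpid": "7"}],                 # placeholder user group
            }
        ],
    },
    "auth": AUTH_TOKEN,
    "id": 1,
}

response = requests.post(ZABBIX_URL, json=request, timeout=10)
print(response.json())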
It normally works fine, but last night (and on a couple of other occasions) we had a release where we shut down several hosts and then brought them back up, generating a couple dozen problem/OK event pairs. We then re-enabled the email operation for the alert.
At that point, every event (problem or OK) from roughly the last 24 hours started alerting every 5 minutes. This continued until I went through and acknowledged each event. Even after the problem events were acknowledged, it kept alerting on the OK events until those were acknowledged as well.
What can we do to stop this? We don't want Zabbix to keep alerting us when a trigger is no longer in a problem state.