We have a handful of Nexus devices that we monitor. Lately we have been getting some strange false alerts on interface traffic capacity. From what I can tell, there is no reason these alerts should be going off. Here is the trigger definition:
{Template SNMP Cisco Nexus:ifSpeed[{#IFDESCR}].last()}>0 and
{Template SNMP Cisco Nexus:ifHCInOctets[{#IFDESCR}].avg(60m)}>{Template SNMP Cisco Nexus:ifSpeed[{#IFDESCR}].last()}*.95
We are basically looking for alerts when traffic exceeds 95% of the interface capacity over a 60m period. The problem is, based on the history we are seeing, the traffic on some of these devices never even exceeds 30% of capacity, let alone 95%. Is there something wrong with this trigger? For context, the ifSpeed value is 10Gbps and most of the traffic we see hovers in the 10Mbps to 2Gbps range. I should add that these alerts tend to fire when traffic spikes and then resolve after 60 minutes, so I'm wondering if I have something reversed and I'm just not seeing it.
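For reference, here is the back-of-the-envelope check I keep doing (a rough Python sketch of the arithmetic, assuming both sides of the comparison are in bits per second, which may be exactly where I'm going wrong):

# Rough sanity check of the trigger numbers, assuming both ifSpeed and
# the averaged traffic value are expressed in bits per second.
if_speed_bps = 10_000_000_000            # ifSpeed on these ports: 10 Gbps
threshold_bps = if_speed_bps * 0.95      # trigger threshold: 9.5 Gbps

observed_peak_bps = 2_000_000_000        # highest traffic we actually see: ~2 Gbps
utilization = observed_peak_bps / if_speed_bps

print(f"threshold:  {threshold_bps / 1e9:.1f} Gbps")   # 9.5 Gbps
print(f"peak usage: {utilization:.0%} of capacity")    # 20%

On those numbers the trigger should never fire, which is why the alerts during traffic spikes have me confused.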