I have a device that I just started monitoring using SNMP. One of its OIDs returns the min, avg, and max processor usage (as a percentage) over the past 5 seconds. I collect the data every 30 seconds and created a stacked graph that displays the max value across all 12 processors. History storage for the item is set to 90 days, trend storage to 365.
This morning, I looked at the graph and saw several peaks close to 300. As I didn't have much time, I decided to review the data collected during office hours later. But now, looking at the same graph some 15 hours later, the highest peak is only about 130.
Why are these values averaging out so fast, and how can I prevent this?
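A likely cause is that the graph is no longer drawing the raw 30-second history points but an averaged series (e.g. per-hour trend values), which flattens short spikes. This toy sketch (the numbers are made up, not from my device) shows how a single 30-second spike near 300 almost disappears once samples are averaged:

```python
# Toy illustration: averaging high-resolution samples flattens short peaks.
samples = [20.0] * 120   # one hour of 30-second CPU readings, baseline 20%
samples[40] = 300.0      # a single 30-second spike near 300

peak = max(samples)                    # what the raw history shows
hourly_avg = sum(samples) / len(samples)  # what an averaged trend point shows

print(peak)                  # 300.0
print(round(hourly_avg, 1))  # 22.3
```

If the graph switches from history to averaged trends (or averages many samples per pixel), the displayed peak drops accordingly, even though the raw data still contained the spike.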
I really need to be able to display the actual values. We are experiencing performance issues, and without the real values I cannot prove they are related to the way this device behaves.