I have been wondering about the cost, in terms of server processing load, of various triggers.
I think simple triggers are fairly cheap: for item > constant, the server is comparing against a value it just received, so no DB access is needed.
A little more complicated: item.min(20m) should use the value cache, and the cost would depend on the update interval; with a 60-second interval that would usually mean working over about 20 values.
More expensive: item.min(20m) > item.avg(20m,1w)*1.25. Now I'm also fetching a 20-minute interval of week-old data. If I do this for 500 servers on an item updated every 60 seconds, how expensive is it?
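Roughly counting, and this is only my understanding of how the value cache works: the min(20m) side needs about 20 recent values per item, which should be cheap, but the avg(20m,1w) side means the value cache would have to hold roughly a week of history per item to avoid hitting the database, i.e. about 10,080 values at a 60-second interval, so 500 items × ~10,080 ≈ 5 million cached values, or DB reads whenever that range isn't in the cache.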
I got a request to trigger on I/O wait on Linux servers; apparently I/O wait caused an application outage, though I don't have the details. A simple test of > 10% (for example) will often fire when something like a backup process runs. I think comparing against last week's average for the same period, and only triggering if we are busier than we were then, might reduce the number of false alarms, something like the expression below.
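This is what I have in mind, written in the {host:key.function(params)} expression style with the standard system.cpu.util[,iowait] agent item (the host name is just a placeholder for whatever the template actually uses):

{my-linux-host:system.cpu.util[,iowait].min(20m)} > 10 and
{my-linux-host:system.cpu.util[,iowait].min(20m)} > {my-linux-host:system.cpu.util[,iowait].avg(20m,1w)} * 1.25

i.e. only fire if I/O wait has been above 10% for the last 20 minutes and is at least 25% higher than the average for the same 20 minutes a week ago.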
Thanks