In my scenario forecasting does not bring much; spike usage analysis gives me much better results.
Therefore, in my disk monitoring triggers I use the delta function (i.e. max - min).
I look 2 weeks back, and if that delta is bigger than my current free space, I know I am at risk, since I do not have enough room for a spike of that size to happen again.
Since looking back 2 weeks is an expensive operation, I moved it from the trigger (which is probably evaluated too often) to a calculated item, where I can define how often it should be calculated. For the moment I calculate it once a day (for hundreds of disks).
My calculated item prototype (for LLD) key is:
Code:
free.space.spikessize[{#FSNAME}]
and the formula is:
Code:
delta("vfs.fs.size[{#FSNAME}, free]",2w)
The trigger prototype contains:
Code:
{vfs.fs.size[{#FSNAME}, free].last()} < {free.space.spikessize[{#FSNAME}].last()}
Or in English: if my current free space is smaller than my recent spike size, trigger a notification.
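To make that concrete, here is a minimal sketch of what the calculated item and trigger effectively compute, outside Zabbix, in plain Python; the sample values and the window_delta helper are made up for illustration only:
Code:
# Hypothetical free-space samples (GB) over the 2-week window, newest last.
free_space_gb = [55, 30, 48, 25, 40]

def window_delta(samples):
    """What delta() returns over the window: max - min, regardless of direction."""
    return max(samples) - min(samples)

spike_size = window_delta(free_space_gb)   # 55 - 25 = 30
current_free = free_space_gb[-1]           # 40 (the latest value)

# The trigger condition: not enough headroom for a spike of the recent size.
if current_free < spike_size:
    print("ALERT: free space is smaller than the recent spike size")
else:
    print("OK")                            # here 40 >= 30, so no alert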
This works pretty well!
But it has a small problem: delta is not really the spike size, it is max - min, unaware of direction.
A better solution would be to detect low peaks and compare them only with high peaks that occurred earlier, i.e.:
If I have a 100GB disk where the free space moves in peaks such as
40GB Free -> 10GB Free (30GB usage spike)
10GB Free -> 60GB Free (50GB delta due to disk enlargement)
60GB Free -> 40GB Free (20GB usage spike)
With my current implementation (delta) the trigger will be true because
40GB < 50GB
when in fact I need
40GB not < 30GB (my real largest down spike)
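For clarity, what I actually want is essentially the largest "drawdown" of the free-space series: the biggest drop from an earlier high to a later low. A minimal sketch of that logic in plain Python, using the example values above (the largest_drop helper and the comparison are just an illustration of the calculation, not an existing Zabbix function):
Code:
free_space_gb = [40, 10, 60, 40]   # the example above: 40 -> 10 -> 60 -> 40

def largest_drop(samples):
    """Largest decrease from an earlier high to any later low."""
    worst_drop = 0
    highest_so_far = samples[0]
    for value in samples[1:]:
        highest_so_far = max(highest_so_far, value)           # best earlier high
        worst_drop = max(worst_drop, highest_so_far - value)  # drop from that high
    return worst_drop

spike = largest_drop(free_space_gb)   # 30 (the 40 -> 10 drop), not 50
current_free = free_space_gb[-1]      # 40
print(current_free < spike)           # False: 40 is not < 30, so no alert
If I could express that per 2-week window in a calculated item, the trigger above would stop firing on disk enlargements.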
Any idea how to achieve such a calculation in Zabbix?