I can see how the built-in Zabbix vfs.dev.read key might work for disk0, but how do I extend it to cover 2, 4, 20, 395, etc. disks? Where does it get the number of disks on the system, and which are functional and which are not? What factor does it use to determine whether a disk is good, bad, marginal, or going bad? Thanks
Hard disk monitoring
-
If you are referring to checking multiple disks within one read key, it's not directly possible. You would need to write an external script that iterates through the proc directory. The sum aggregation function will not work either, as it only aggregates the same key across a group of hosts.
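Something along these lines could serve as that external script (an untested sketch; the script path, the key name, and the sd/hd device pattern are assumptions, and it reads /proc/diskstats as on newer kernels):
#!/bin/sh
# sum_disk_reads.sh - example only: sum completed reads across whole disks
# (skipping partitions such as sda1) from /proc/diskstats
awk '$3 ~ /^(sd|hd)[a-z]+$/ { total += $4 } END { print total + 0 }' /proc/diskstats
You would then point a user parameter at it, e.g.:
UserParameter=custom.vfs.dev.read.ops.all,/usr/local/bin/sum_disk_reads.sh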
As for knowing when a drive is about to fail, that can be a tough one. People often say that a single sector error means the drive is on its way out, but with the error correction on most drives you often don't find out about a sector error until late in the game. SMARTD can tell you some more information, but I haven't had experience seeing a failure through it.
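As a rough illustration only (not the recipe mentioned below; the key name is made up, smartmontools must be installed, and the agent user needs sudo rights for smartctl), a basic health check could look like:
# Example only: prints 1 if the overall SMART self-assessment is PASSED, 0 otherwise
UserParameter=custom.smart.health[*],sudo smartctl -H /dev/$1 | grep -c PASSED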
I don't know if I posted my SMARTD recipe to the wiki; if not, I'll post it when I get a chance.
RHCE, author of zbxapi
Ansible, the missing piece (Zabconf 2017): https://www.youtube.com/watch?v=R5T9NidjjDE
Zabbix and SNMP on Linux (Zabconf 2015): https://www.youtube.com/watch?v=98PEHpLFVHM
-
Do you use SMART? Does anyone find that monitoring SMART actually helps?
I've done some basic poking around with it, but haven't found it of much value. I know 3ware and RAIDZ keep track of SMART for you. I don't think Linux software RAID does anything SMART-related.
-
So one thing we have optimized as much as possible is a basic read from /proc. On RHEL3 it was in /proc/partitions, and on RHEL5 it is in /proc/diskstats (I assume this one likely works with RHEL4 as well).
Here is RHEL3
# Disk I/O Stats
UserParameter=custom.vfs.dev.read.ops[*],awk '/$1/{print $$5; exit}' /proc/partitions
UserParameter=custom.vfs.dev.read.sectors[*],awk '/$1/{print $$7; exit}' /proc/partitions
UserParameter=custom.vfs.dev.read.ms[*],awk '/$1/{print $$8; exit}' /proc/partitions
UserParameter=custom.vfs.dev.write.ops[*],awk '/$1/{print $$9; exit}' /proc/partitions
UserParameter=custom.vfs.dev.write.sectors[*],awk '/$1/{print $$11; exit}' /proc/partitions
UserParameter=custom.vfs.dev.write.ms[*],awk '/$1/{print $$12; exit}' /proc/partitions
UserParameter=custom.vfs.dev.io.active[*],awk '/$1/{print $$13; exit}' /proc/partitions
UserParameter=custom.vfs.dev.io.ms[*],awk '/$1/{print $$14; exit}' /proc/partitions
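For reference, the field numbers above assume roughly the 2.4-kernel /proc/partitions layout (from memory, so verify against your kernel):
# 1:major 2:minor 3:#blocks 4:name 5:rio 6:rmerge 7:rsect 8:ruse
# 9:wio 10:wmerge 11:wsect 12:wuse 13:running 14:use 15:aveq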
Here is RHEL5
# Disk I/O Stats
UserParameter=custom.vfs.dev.read.ops[*],awk '/$1/{print $$4; exit}' /proc/diskstats
UserParameter=custom.vfs.dev.read.sectors[*],awk '/$1/{print $$6; exit}' /proc/diskstats
UserParameter=custom.vfs.dev.read.ms[*],awk '/$1/{print $$7; exit}' /proc/diskstats
UserParameter=custom.vfs.dev.write.ops[*],awk '/$1/{print $$8; exit}' /proc/diskstats
UserParameter=custom.vfs.dev.write.sectors[*],awk '/$1/{print $$10; exit}' /proc/diskstats
UserParameter=custom.vfs.dev.write.ms[*],awk '/$1/{print $$11; exit}' /proc/diskstats
UserParameter=custom.vfs.dev.io.active[*],awk '/$1/{print $$12; exit}' /proc/diskstats
UserParameter=custom.vfs.dev.io.ms[*],awk '/$1/{print $$13; exit}' /proc/diskstats
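If it helps, you can sanity-check a key locally before creating the items (sda is just an example device name):
# Test one of the custom keys directly against the agent configuration
zabbix_agentd -t "custom.vfs.dev.read.ops[sda]"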
And when you make the items in the web front end, I believe you must set "Store Value" to "Delta (speed per second)".
-
Hi brendon
I poked around with it on Solaris 9.
The problem is that the SMART tools are not usable if you use software disk mirroring.
The reason: the virtual disk does not support SMART (it is, after all, a software disk), and there is no access to the underlying physical disks: drive busy!
So we now use "iostat" on Solaris for disk monitoring.
It'll allow predictive monitoring, etc.
A simple user parameter and you're done.
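For example, something like this might do for error counts (a sketch only; the key name is made up and the awk field number depends on the exact "iostat -En" output on your system):
# Example only: hard error count for a given device from "iostat -En"
UserParameter=custom.solaris.disk.hard_errors[*],iostat -En $1 | awk '/Hard Errors/{print $$7}'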
Regards
Norbert.
-
How to use it on the Zabbix front-end?
Hello,
Could you show an example of how to use it in the Zabbix front-end, please?
Thanks in advance.
