The situation is as follows:
1. Bare metal VM host runs CentOS 8 (I/O is a consumer-level HDD RAID)
2. VM1 on this host runs CentOS 8
3. VM2 on this host runs CentOS 7
I monitor these using "Template Module Linux block devices by Zabbix agent active". For all nodes (not only the ones above) running CentOS 7 or Ubuntu 16.x, the values for vfs.dev.(read|write).await are sensible.
For the nodes on CentOS 8, the numbers seem inflated:
- on the bare metal VM host, by a factor of roughly 10;
- on the VM guest, by a factor of roughly 100.
See attached screenshot of the average numbers.
It seems to me that either:
1. CentOS 8 calculates the numbers in /proc/diskstats differently, which results in inflated values, or
2. Zabbix does something wrong on CentOS 8.
I'm not sure which one it is.
Are others seeing this difference too?
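For context, await is not read directly from /proc/diskstats; it is derived from deltas between two samples, the same way iostat does it: (delta of ms spent on reads) / (delta of reads completed), and likewise for writes. The field positions below follow the kernel's iostats documentation; the sample lines and values are made up for illustration, not taken from my hosts:

```python
def parse_diskstats_line(line):
    """Pick the /proc/diskstats fields needed for await.

    Token layout: major, minor, device name, then the stat fields
    numbered per Documentation/admin-guide/iostats.rst.
    """
    t = line.split()
    return {
        "name": t[2],
        "reads": int(t[3]),      # field 1: reads completed
        "read_ms": int(t[6]),    # field 4: ms spent reading
        "writes": int(t[7]),     # field 5: writes completed
        "write_ms": int(t[10]),  # field 8: ms spent writing
    }

def await_ms(prev, curr):
    """Average read/write latency (ms) between two samples."""
    d_reads = curr["reads"] - prev["reads"]
    d_writes = curr["writes"] - prev["writes"]
    r = (curr["read_ms"] - prev["read_ms"]) / d_reads if d_reads else 0.0
    w = (curr["write_ms"] - prev["write_ms"]) / d_writes if d_writes else 0.0
    return r, w

# Two synthetic samples for a hypothetical device sda:
s1 = parse_diskstats_line("8 0 sda 1000 0 0 5000 2000 0 0 20000 0 0 0")
s2 = parse_diskstats_line("8 0 sda 1100 0 0 5600 2200 0 0 24000 0 0 0")
print(await_ms(s1, s2))  # -> (6.0, 20.0): read await 6 ms, write await 20 ms
```

If the kernel changed how the "ms spent" fields are accounted between CentOS 7's 3.10 and CentOS 8's 4.18 kernels, this formula would produce different numbers for identical workloads even if the agent's arithmetic is unchanged, which is why I'm unsure whether to blame the kernel or Zabbix.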