vfs.dev.read broken in Linux
-
Same problem here:
I have compiled agents 1.6.8, 1.8.4, and 1.8.9, and when I execute the following command I get a "Collector is not started!" message:
./zabbix_agentd -t vfs.dev.read[sda]
The same command on a freshly compiled Zabbix agent 1.4.6 works perfectly.
Does anybody know what's wrong? I monitor 250 hosts, and none of the 1.6.x agents return any data.
-
They changed the syntax in versions above 1.4.x.
Use this syntax for the item:
Key = vfs.dev.read[sda,sectors]
Type = Numeric (unsigned)
Data type = Decimal
Units = B/s
Use custom multiplier = 512
Store Value = Delta (speed per second)
Show Value = As is
Why they broke out-of-the-box functionality is completely beyond me. It must be configured by hand, post install.
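The item definition above turns the raw sectors counter into bytes per second: "Delta (speed per second)" takes the per-second change of the counter, and the custom multiplier converts 512-byte sectors into bytes. A minimal sketch of the same arithmetic (the counter values are made-up samples, not real readings):

```shell
# Sketch of what the item config above computes (sample counter values assumed):
prev=9590584        # sectors counter at the previous check
curr=9591608        # sectors counter one interval later
interval=60         # item check interval in seconds
# Delta (speed per second), then the 512-byte custom multiplier:
bytes_per_sec=$(( (curr - prev) * 512 / interval ))
echo "$bytes_per_sec B/s"   # → 8738 B/s
```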
-
I thought you were going to say that ;-)
OK:
zbx_agent version: 1.4.5
zabbix_get version: 1.8.4
command: ./zabbix_get -s myhostname -k vfs.dev.write[sda]
result: 9590584
zbx_agent version: 1.6.8
zabbix_get version: 1.8.4
command: ./zabbix_get -s myhostname -k vfs.dev.write[sda]
result: 10.266667
zbx_agent version: 1.8.4
zabbix_get version: 1.8.4
command: ./zabbix_get -s myhostname -k vfs.dev.write[sda]
result: 10.212
It looks like the result format for agent 1.4.5 differs from 1.6 and 1.8.
-
Most likely it's not the format but the default parameters used: sectors (which is a counter) vs. <something> per second. Try passing a parameter to specify the type; the value range should be more uniform then.
-
Hi,
I am also trying to monitor the disk read and write data on the host.
I have created the items like this:
1. For write data: vfs.dev.write[all,ops,avg1]
2. For read data: vfs.dev.read[all,ops,avg1]
The problem is that I can see the write data, but the read data is always zero (0).
What could be the problem? Is there an issue in the configuration? Please see the item creation page and the data graph.
Please find the attached screens:
-
What version of the agent are you using?
I just tested this with 2.0.6 on my Zabbix DB server and am getting both values.
I changed my key from what you have though. I simply went with these:
vfs.dev.read[,ops,]
vfs.dev.write[,ops,]
I left out "all" on purpose, as that is the default.
(This is specific to agent 2.x)
As an FYI, also see the notes at the very bottom of that page.
-
Hi tchjts1,
Thanks for the very quick response.
I am using version zabbix-2.0.4.
I have tried using the item for read: vfs.dev.read[,ops,]
Still there is no data; I see only zero all the time.
But the write data is clearly varying over time.
I have no idea why it is behaving this way.
While going through the Zabbix forums, I found this link.
If I follow it and modify zabbix_agentd, will it work? Please see the link below:
Thank you once again.
:-)
-
Interesting. That info is about 5 years old, but I think it would still be fine, as it is a userparameter. I'll give it a try when I get into the office later today and let you know how it works out.
I have a lot of these set up for Solaris (disk I/O stats) and would like to beef up my Linux templates as well.
-
So I was able to import the template by unchecking the "graphs" option; the import complained about that. I added the userparms to my agentd.conf and was able to pull data. The only issue I see is that you have to specify each device: [sdb], [sdc], [sda1], etc. There is no "all" functionality that I am aware of.
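Flexible userparameters of this kind typically pull one column out of /proc/diskstats per device. A sketch of what such agentd.conf entries might look like (the key names follow this thread; the /proc/diskstats field numbers and the `$$` escaping are assumptions to verify against the Zabbix documentation — Zabbix expands `$1` as the key parameter, so literal awk field references have to be written as `$$N`):

```
# Sketch only -- verify field numbers and $$ escaping against the Zabbix docs.
# /proc/diskstats fields: 4=reads completed, 6=sectors read,
#                         8=writes completed, 10=sectors written
UserParameter=custom.vfs.dev.read.ops[*],awk '$$3 == "$1" {print $$4}' /proc/diskstats
UserParameter=custom.vfs.dev.read.sectors[*],awk '$$3 == "$1" {print $$6}' /proc/diskstats
UserParameter=custom.vfs.dev.write.ops[*],awk '$$3 == "$1" {print $$8}' /proc/diskstats
UserParameter=custom.vfs.dev.write.sectors[*],awk '$$3 == "$1" {print $$10}' /proc/diskstats
```

With entries shaped like this, the device name is passed as the key parameter, e.g. custom.vfs.dev.read.ops[sdb], which is why each device has to be listed explicitly in the items.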
-
Hi,
I have tried adding the custom parameters to the zabbix-agentd config file
and restarted my Zabbix agent.
After that I tried connecting both through the Zabbix UI and on the Zabbix server by running the following:
1. On the Zabbix UI:
custom.vfs.dev.write.ops[sdb]
2. On the Zabbix server:
zabbix_get -s ec2-54-241-24-225.us-west-1.compute.amazonaws.com -p 10050 -k custom.vfs.dev.read.sectors[sdb]
I did not get any result for this, only a "Not supported" exception.
-
You may not have an sdb partition.
On your linux box you can run: sudo fdisk -l
Which will give you a list.
It'll look something like this
Code:
Disk /dev/sda: 73.2 GB, 73284976640 bytes
255 heads, 63 sectors/track, 8909 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1580    12586927+  83  Linux
/dev/sda3            1581        8778    57817935   fd  Linux raid autodetect
/dev/sda4            8779        8909     1052257+   5  Extended
/dev/sda5            8779        8909     1052226   82  Linux swap / Solaris

Disk /dev/sdb: 73.2 GB, 73284976640 bytes
255 heads, 63 sectors/track, 8909 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
-
Hi tchjts1,
Sorry for the late reply.
I have checked the partitions using sudo fdisk -l.
I did not understand exactly what to extract, or why it's not working for me.
Can you please guide me on how to fetch the read data?
============
Disk /dev/xvda1: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvda1 doesn't contain a valid partition table
Disk /dev/xvdb: 450.9 GB, 450934865920 bytes
255 heads, 63 sectors/track, 54823 cylinders, total 880732160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvdb doesn't contain a valid partition table
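Besides fdisk, the device names the Zabbix keys refer to can be read directly from /proc/diskstats, where the name is the third field. A small sketch using two made-up sample lines in that format:

```shell
# Two sample /proc/diskstats-style lines (values made up); on a real host
# you would read /proc/diskstats itself.
sample='   8       0 sda 1000 0 9590584 0 2000 0 10266667 0 0 0 0
 202      16 xvdb 500 0 123456 0 800 0 654321 0 0 0 0'
# Field 3 is the device name to use in keys like vfs.dev.read[<name>,...]:
echo "$sample" | awk '{print $3}'   # prints: sda, then xvdb
```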
-
The information is there.
So you have xvda1 and xvdb, whereas I have sda1 and sda2.
Now if you go to the template that you imported, make the items with your device names, and attach that template to the server you got that info from, you should start seeing data coming in.
But this is where the issue lies: you have to be specific. There is no "all" capability that I am aware of for this, so you would have to manage these items on a server-by-server basis. That is very much a pain, and it defeats the purpose of a template.
My items look like this using sda1: (Change your "interval" to 60; the default of 10 is too often in my opinion.)
Last edited by tchjts1; 16-05-2013, 23:53.
