We have a Zabbix environment with one server and one proxy, running version 6.0.29, with 1013 hosts and a required server performance of 1525.53 new values per second. I am new to Zabbix and have just added our vCenter host, configured to be monitored by the proxy. The vCenter and the ~20 ESXi hosts are discovered. I have disabled VM discovery since we have agents on all VMs (although I noticed that some of the VMs where we don't have an agent were still created as hosts by Zabbix).

The problem is that we have around 12000 checks in the queue (over 10 minutes) on the proxy, and the number does not decrease. If I look at a metric from a random ESXi host (the symptoms are the same for all hosts found by the vCenter discovery rules), I can see that Zabbix collects a couple of metrics every hour, but then it takes an hour or two until metrics are collected again. The collection interval is configured to 1 minute (see the proxy config excerpt after the data below):
2024-10-28 09:57:36 4.5
2024-10-28 08:03:36 4.29
2024-10-28 08:02:36 4.35
2024-10-28 08:01:36 4.65
2024-10-28 08:00:36 4.33
2024-10-28 07:59:36 4.22
2024-10-28 07:58:36 4.18
2024-10-28 07:57:36 4.25
2024-10-28 06:03:36 4.77
2024-10-28 06:02:36 4.96
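For reference, these are the VMware-related parameters I found in zabbix_proxy.conf that, as I understand it, control the collector processes. The values below are only illustrative examples, not necessarily what we currently have set:

    # Number of VMware collector processes; 0 disables VMware monitoring on the proxy
    StartVMwareCollectors=4
    # How often (seconds) the collectors refresh configuration/topology data from vCenter
    VMwareFrequency=60
    # How often (seconds) the collectors refresh performance counter data from vCenter
    VMwarePerfFrequency=60
    # Shared memory for storing collected VMware data
    VMwareCacheSize=32M
    # Timeout (seconds) for a single vCenter/ESXi API request
    VMwareTimeout=10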
I guess these gaps are caused by the queue, but I cannot find a single performance metric on the Zabbix server that points to the problem. All metrics look fine to me. I would appreciate some advice on what to look for in Zabbix to find the root cause of this queue problem.
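In case it helps, these are the internal items I was planning to add to a host monitored by the proxy to narrow this down (assuming I have the keys right for 6.0):

    zabbix[process,vmware collector,avg,busy]   - average busyness of the VMware collector processes
    zabbix[vmware,buffer,pused]                 - percentage of the VMware cache in use
    zabbix[queue,10m]                           - number of items delayed by more than 10 minutes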