The environment currently consists of six Zabbix proxy instances and one Zabbix server instance.
The history, history_uint, and trends tables are partitioned by clock on a daily basis.
The minimum clock value present in those tables is 1717167600.
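For anyone reproducing the check, the oldest stored timestamp can be read directly from the table. A minimal sketch, assuming MySQL/MariaDB (PostgreSQL would use to_timestamp() instead of FROM_UNIXTIME()):

-- Oldest history_uint row; 1717167600 corresponds to 2024-05-31 15:00 UTC.
SELECT MIN(clock) AS oldest_clock, FROM_UNIXTIME(MIN(clock)) AS oldest_time
FROM history_uint;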
However, the Zabbix server internally executes queries like the following against history / history_uint, and the overly broad time range makes them long-running:
SELECT clock, ns, value FROM history_uint WHERE itemid = 663187 AND clock > 1689304026 AND clock <= 1749458997;
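For scale: the lower bound 1689304026 corresponds to roughly 2023-07-14, more than ten months before the oldest stored data, so the range spans every existing daily partition and the statement has to probe each of them. A minimal way to confirm which partitions are touched, assuming MySQL (on MariaDB, EXPLAIN PARTITIONS exposes the same column; on PostgreSQL, plain EXPLAIN lists the scanned child partitions):

-- The partitions column should list every daily partition up to the upper bound.
EXPLAIN SELECT clock, ns, value FROM history_uint
WHERE itemid = 663187 AND clock > 1689304026 AND clock <= 1749458997;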
I have confirmed that this query is not triggered by viewing graphs in the Zabbix frontend, and that it runs under the database user defined in zabbix_server.conf.
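As a sketch of how such a session can be attributed to that user, assuming MySQL/MariaDB and that DBUser in zabbix_server.conf is 'zabbix' (a placeholder; pg_stat_activity is the PostgreSQL equivalent):

-- Long-running statements issued by the Zabbix server's database user.
SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE user = 'zabbix'   -- DBUser from zabbix_server.conf (placeholder)
  AND time > 60         -- running longer than 60 seconds
ORDER BY time DESC;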
Why does Zabbix internally issue queries with such unnecessarily wide time ranges, even though the oldest data starts at 1717167600?
The issue persists even after restarting the Zabbix server.
These long-running queries are causing service issues within Zabbix.
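To quantify the impact while the cause is being investigated, the offending statements can be captured in the slow query log. A sketch assuming MySQL/MariaDB (log_min_duration_statement is the PostgreSQL equivalent), with a 10-second threshold chosen purely for illustration:

-- Enable slow query logging at runtime; the threshold is an example value.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 10;
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';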