We are running Zabbix v6.2.4 against a Postgres 14.4 backend with TimescaleDB v2. Both Zabbix and Postgres run on the same RHEL 8 server, which has 32 GB of RAM.
The server is not yet in production and is monitoring only about 30 hosts, at roughly 22 new values per second.
The problem is that the memory usage of the Postgres backend process associated with the LLD worker keeps growing over a few days until all available memory is consumed, Postgres aborts the connection, and the zabbix-server service restarts.
In an attempt to alleviate the issue, I reduced the number of LLD workers from four down to one. However, the issue remains, with that single backend consuming more than 8 GB of memory by the time the service restarts. Postgres shared_buffers is set to 6GB and work_mem to 32MB.
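For reference, the relevant settings currently look roughly like this (a sketch, not a verbatim copy of the files; StartLLDProcessors is the standard zabbix_server.conf parameter for the LLD worker count, the Postgres values are the ones quoted above):

    # zabbix_server.conf -- LLD workers reduced to a single process
    StartLLDProcessors=1

    # postgresql.conf -- memory settings on the 32 GB host
    shared_buffers = 6GB
    work_mem = 32MB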
Would introducing a connection pooler (e.g. pgbouncer) and forcing server connections to be recycled once they go idle eliminate the issue?
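What I had in mind is something along these lines (a sketch only; pool_mode, server_idle_timeout and server_lifetime are standard pgbouncer settings, the host/path/values are illustrative):

    ; pgbouncer.ini
    [databases]
    zabbix = host=127.0.0.1 port=5432 dbname=zabbix

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; Zabbix keeps long-lived sessions, so session pooling
    pool_mode = session
    ; drop a server connection that has sat idle this long (seconds)
    server_idle_timeout = 300
    ; close server connections older than this once they are not in use,
    ; so backends get recycled and their memory released (seconds)
    server_lifetime = 3600

My uncertainty is that these timeouts only apply to server connections the pooler considers unused, so if the LLD worker holds its session open continuously they may never trigger, which is part of what I am hoping someone can confirm.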