Hi everyone,
First of all, my Zabbix environment:
-Version 6.0.2, currently upgraded to 6.0.8
-7660 items, 432 hosts, 5884 triggers. Approx. 80 NVPS
-Hosted on a Hyper-V VM: 8 GB RAM + 4 cores. Frontend, zabbix-server and MariaDB 10.6.7 on the same VM. Database on a separate partition
-OS: Ubuntu Server 20.04
Initially I had the problem "Pinger processes = 100%", which kept the CPU at 100% all the time, and there was no way to bring the value down. When I tried to restart the server, my database broke, as I couldn't start the mariadb daemon anymore. The cause seemed to be a slow query that never finished and slowed the server down considerably.
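For anyone wanting to check the same thing on their side: a query that never finishes should be visible from the mysql client with
SHOW FULL PROCESSLIST;
which lists every running statement and how many seconds it has been executing.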
As I didn't want to lose my maps, the solution that worked was deleting all the custom templates and unlinking all the hosts, then importing the custom templates again and re-applying them to the hosts.
After that, another problem came up: the Housekeeper process, which pushes RAM usage up to 100% and makes the OS kill the zabbix-server process. If I leave the VM's RAM dynamic, consumption shoots up to 40-50 GB.
As a result, my graphs have gaps approx. every 30 min (please see the screenshots).
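In case the size of the history tables is part of the problem, a query along these lines should show how much data the housekeeper has to work through (assuming the default schema name 'zabbix'; adjust if yours differs):
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'zabbix'
  AND table_name IN ('history', 'history_uint', 'history_str',
                     'history_text', 'history_log', 'trends', 'trends_uint')
ORDER BY size_mb DESC;
(table_rows is only an estimate for InnoDB, but it gives an idea of the volume.)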
What I tried to resolve the problem:
-Tuned MariaDB settings:
innodb_buffer_pool_size=4G
innodb_log_file_size=1G
max_connections=200
tmp_table_size=512M
max_heap_table_size=512M
query_cache_size=128M
skip-name-resolve
#Interrupt idle connections
wait_timeout=120
# Enable slow query log
long-query-time=3
slow-query-log=1
slow-query-log-file=/var/log/mariadb-slow-query.log
-Tried to change Housekeeper parameters, which had no result. When I turn housekeeping off, zabbix-server is stable, but that is not a solution: I don't want my DB to grow endlessly.
The Housekeeper parameters are currently at their defaults; I only changed MaxHousekeeperDelete=50000, which made no difference (the relevant zabbix_server.conf lines are shown just below this list).
-Updated to 6.0.8
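For completeness, this is roughly what the housekeeper-related part of my zabbix_server.conf looks like (the default values quoted in the comments are taken from the Zabbix documentation, not from my config):
# HousekeepingFrequency left at its default (housekeeping runs every 1 hour)
# HousekeepingFrequency=1
# MaxHousekeeperDelete raised from the default of 5000
MaxHousekeeperDelete=50000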
As I understand it, with my current data load I shouldn't be having problems with the housekeeper process. Is that correct? If so, what could be the solution to this problem? Any help would be really appreciated.
RAM problem:
VM kills the process all the time:
Screenshots and zabbix_server.log:
https://www.mediafire.com/file/ytwo2...erver.log/file
Regards
Andrey.