Hello Fellows,
We have a Zabbix server that is experiencing very high CPU consumption due to the mariadb service. This server (an AWS instance) was recently installed and its database was migrated from a previous version, as follows:
Previous setup – CentOS 7 + Zabbix Server 5.0.15 + MariaDB dist 5.5.60 + PHP 7.2.34
New setup – CentOS 8 + Zabbix Server 5.4 + MariaDB dist 10.5 + PHP 7.3
Once installed and migrated, the new server showed constant high CPU usage, even causing the server to crash. As a workaround we then increased the server's RAM and raised innodb_buffer_pool_size.
The present hardware setup is: 2 CPUs, 8 GB RAM and a 60 GB disk.
The Zabbix Server has around 69 active hosts, 4813 active items and 147 NVPS.
So far it is holding up, but we still see the CPU frequently peaking at 80%-100% due to mariadb activity.
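In case it helps with the diagnosis, a quick way to see which client threads are busy during one of these peaks would be something like the query below (just a sketch against the standard information_schema.PROCESSLIST view):

-- sketch: list non-idle threads, longest-running first, while CPU is peaking
SELECT id, user, db, time, state, LEFT(info, 120) AS running_statement
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep'
ORDER BY time DESC;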

We made some further fine-tuning adjustments to the mariadb configuration in an attempt to increase its performance, but they seemed to have little effect. The resulting configuration is as follows:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_connections = 400
max_allowed_packet=1024M
thread_pool_idle_timeout=7200
wait_timeout=120
interactive_timeout=300
innodb_buffer_pool_instances=2
innodb_buffer_pool_size=4831838208
innodb_file_per_table
innodb_log_file_size=64M
join_buffer_size=256k
read_buffer_size=256k
read_rnd_buffer_size=256k
table_cache=256
tmp_table_size=256M
max_heap_table_size=256M
query_cache_size=1M
thread_cache_size=300
[mariadb]
log_error=/var/log/mariadb/error.log
#log_queries_not_using_indexes=/var/log/mariadb/not_using_indexes.log
slow_query_log=ON
slow_query_log_file=/var/log/mariadb/mariadb-slow.log
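For reference, this is roughly how we would check whether those values are actually taking effect (a sketch using MariaDB's standard status counters; corrections welcome if we are reading them wrong):

-- buffer pool efficiency: Innodb_buffer_pool_reads (reads from disk) should stay
-- very small compared to Innodb_buffer_pool_read_requests (logical reads)
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- temporary tables: a large share of Created_tmp_disk_tables suggests sorts/joins
-- are still spilling to disk despite tmp_table_size / max_heap_table_size
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

-- confirm the running server actually picked up the configured buffer pool size
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';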
Also, we activated the slow query log and are seeing a recurring query that takes 10-15 seconds to complete. The query is shown below:
# User@Host: zabbix[zabbix] @ localhost []
# Thread_id: 858953 Schema: zabbix QC_hit: No
# Query_time: 22.167967 Lock_time: 0.000246 Rows_sent: 25 Rows_examined: 4003131
# Rows_affected: 0 Bytes_sent: 3134
SET timestamp=1635358056;
SELECT a.alertid,a.clock,a.sendto,a.subject,a.message,a.status,a.retries,a.error,a.userid,a.actionid,a.mediatypeid,a.alerttype FROM alerts a WHERE EXISTS (SELECT NULL FROM events e,functions f,items i,hosts_groups hgg JOIN rights r ON r.id=hgg.groupid AND r.groupid=14 WHERE a.eventid=e.eventid AND e.objectid=f.triggerid AND f.itemid=i.itemid AND i.hostid=hgg.hostid GROUP BY f.triggerid HAVING MIN(r.permission)>0 AND MAX(r.permission)>='2') AND EXISTS (SELECT NULL FROM actions aa WHERE a.actionid=aa.actionid AND aa.eventsource='0') AND (a.userid IS NULL OR EXISTS (SELECT NULL FROM users_groups ug WHERE ug.userid=a.userid AND ug.usrgrpid=14)) ORDER BY a.clock DESC LIMIT 25;
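In case it helps, here is the same statement prefixed with EXPLAIN and reformatted for readability (a sketch only; the tables and columns are exactly those from the captured query), which is how we intend to inspect the plan for the permission subquery:

EXPLAIN
SELECT a.alertid, a.clock, a.sendto, a.subject, a.message, a.status,
       a.retries, a.error, a.userid, a.actionid, a.mediatypeid, a.alerttype
FROM alerts a
WHERE EXISTS (SELECT NULL
              FROM events e, functions f, items i, hosts_groups hgg
              JOIN rights r ON r.id = hgg.groupid AND r.groupid = 14
              WHERE a.eventid = e.eventid
                AND e.objectid = f.triggerid
                AND f.itemid = i.itemid
                AND i.hostid = hgg.hostid
              GROUP BY f.triggerid
              HAVING MIN(r.permission) > 0 AND MAX(r.permission) >= '2')
  AND EXISTS (SELECT NULL FROM actions aa
              WHERE a.actionid = aa.actionid AND aa.eventsource = '0')
  AND (a.userid IS NULL
       OR EXISTS (SELECT NULL FROM users_groups ug
                  WHERE ug.userid = a.userid AND ug.usrgrpid = 14))
ORDER BY a.clock DESC
LIMIT 25;

The 4,003,131 Rows_examined for only 25 Rows_sent is what makes us suspect this query in the first place.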
Considering the scenario described above, could anyone kindly confirm:
- Is this query running as expected? Does it have any influence on the reported problem?
- Is there any more fine-tuning that can be done to mariadb in order to optimize its CPU performance? Is anything configured incorrectly?
Any help will be much appreciated!
Thanks in advance.
Best regards!