Hello
I have a problem with high CPU load on my Zabbix server, which is probably caused by heavy use of the preprocessing workers.
Info about my Zabbix environment:
- Zabbix Server 5.4.10
- DB - postgresql
- Number of hosts (enabled/disabled) 4066
- Required server performance, new values per second 897.87
- I'm not using proxies; my Zabbix server has multiple network interfaces,
- HA - no
HW:
- CPU: 16 cores
- memory: 64 GB
- HD - 30 GB
Zabbix-server configuration:
- DebugLevel=2
- StartPollers=150
- StartIPMIPollers=2
- StartPreprocessors=180
- StartPollersUnreachable=30
- StartTrappers=30
- StartPingers=50
- StartDiscoverers=2
- StartHTTPPollers=50
- StartTimers=3
- StartAlerters=2
- StartJavaPollers=2
- StartVMwareCollectors=5
- VMwareFrequency=21600
- VMwarePerfFrequency=21600
- VMwareCacheSize=2G
- VMwareTimeout=10
- StartSNMPTrapper=1
- HousekeepingFrequency=1
- CacheSize=4G
- CacheUpdateFrequency=60
- StartDBSyncers=4
- HistoryCacheSize=1G
- HistoryIndexCacheSize=1G
- TrendCacheSize=1G
- ValueCacheSize=12G
- Timeout=25
- TrapperTimeout=30
- LogSlowQueries=3000
- StartProxyPollers=0
A few days ago I noticed high CPU utilization on my Zabbix server, so I started to investigate. I monitored the CPU load and the processes generating it, and found that the preprocessing workers and the preprocessing manager are responsible.
At first I added 8 CPU cores to the server (there were 8, now I have 16), but it didn't help. Next I started to tweak values in zabbix_server.conf - I set StartPollers to 500 (it was 300 before), but then the server would not start.
Above I posted my current zabbix-server configuration; the server is running now, but I'm still seeing the high CPU utilization. Below are some screenshots from my Zabbix server.
-------------
Cpu load - last 30 days

vSphere - last week metrics of my Zabbix server:

Htop result:

I read about Zabbix server performance tuning: https://www.zabbix.com/documentation...ormance_tuning , so this is what I did:
- I decreased StartPollers to 150,
- I decreased the number of the other *Pollers - my current config is above,
- I see nothing in my log file. Is it safe to change DebugLevel to 3 or even 4? I know it can use a lot of disk space...
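In the meantime I read that the log level can be raised at runtime for specific process types only, instead of setting DebugLevel=4 globally, which should keep log growth manageable. This is a sketch of what I'm planning to try (syntax as I understand it from the zabbix_server man page - please correct me if I'm wrong):

```shell
# Raise the log level one step, only for the preprocessing processes
zabbix_server -R log_level_increase="preprocessing manager"
zabbix_server -R log_level_increase="preprocessing worker"

# ...watch zabbix_server.log for a while...

# Lower it back to normal afterwards
zabbix_server -R log_level_decrease="preprocessing manager"
zabbix_server -R log_level_decrease="preprocessing worker"
```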
The documentation says: "Optimal number of instances is achieved when the item queue, on average, contains minimum number of parameters (ideally, 0 at any given moment). This value can be monitored by using internal check zabbix[queue]." In my case it looks bad ...
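As far as I can tell from the 5.4 documentation, the preprocessing side can also be watched with internal items on the Zabbix server host. The keys below are my reading of the docs, corrections welcome:

```
zabbix[queue]                                    # items delayed by at least 6 seconds
zabbix[preprocessing_queue]                      # values waiting for preprocessing
zabbix[process,preprocessing manager,avg,busy]   # average % busy of the manager
zabbix[process,preprocessing worker,avg,busy]    # average % busy of the workers
```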


Could you please help me figure out why the preprocessing workers are flapping like this, and help me set appropriate values for my Zabbix environment? Is it possible to check which items use preprocessing and are "heavy" for Zabbix to preprocess?
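One idea I had (not sure if it is the right approach): preprocessing steps are stored in the item_preproc table, so a query like the one below should show which items have the most steps. Table and column names are from my reading of the 5.4 schema, so treat this as a sketch:

```sql
-- Top 20 items by number of preprocessing steps
SELECT h.name AS host, i.name AS item, COUNT(pp.item_preprocid) AS steps
FROM item_preproc pp
JOIN items i ON i.itemid = pp.itemid
JOIN hosts h ON h.hostid = i.hostid
GROUP BY h.name, i.name
ORDER BY steps DESC
LIMIT 20;
```

Of course step count is not the same as step cost (one heavy regex or JavaScript step can be worse than many trivial ones), but it might at least narrow down the candidates.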
It's hard to find good information about how to set all these values and tune a Zabbix server properly. All answers are appreciated.
Thanks in advance!
Now we are planning how to use proxies in the future.