Zabbix high IO and disk utilization

  • gelny
    Junior Member
    • Feb 2021
    • 8

    #1

    Zabbix high IO and disk utilization

    Hi everyone,
    A few weeks ago our Zabbix server started triggering "all hosts unavailable". After a few minutes everything goes back to normal, but this happens multiple times a day. I found out that the virtual machine running Zabbix uses a lot of disk I/O, with 20-30 MB/s read/write on average. Because this was an old Zabbix version on an old Ubuntu release, I prepared a new virtual machine (Ubuntu 24 LTS), installed the latest Zabbix and moved the database to this new environment. After that I upgraded all proxies to the latest version, and then every Windows agent in use. Disk utilization dropped, but it is still about 10-15 MB/s. CPU and memory utilization are OK. Virtual machine configuration:
    • 4x vCPU @ 2.20GHz
    • 8GB RAM
    • 150GB HDD on RAID1 storage with 10k SAS drives
    We have about 30 proxies connected to our Zabbix server and a total of 183.15 new values per second. I am constantly getting the trigger "Zabbix history syncer processes more than 75% busy" (multiple times a day, usually lasting 5-15 minutes) and the trigger "Zabbix housekeeper processes more than 75% busy", which each time lasts for a few hours until cancelled. Also, "Disk I/O is overloaded on Zabbix server" has been triggered for 6 days now (graph in attachment).
    My question is whether my hardware is too slow and I need to move to SSD drives, or whether there is some configuration change that could reduce disk I/O and get the Zabbix server running smoothly. I can post my Zabbix configuration if needed.
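    As a rough sanity check (my own back-of-the-envelope arithmetic, not from the post; the ~100 bytes per stored value is an assumption for illustration), the raw incoming data rate can be compared against the observed disk writes:

```python
# Rough estimate of raw history write volume versus observed disk I/O.
# bytes_per_value is an assumed average, for illustration only.
nvps = 183.15            # new values per second (figure from the post)
bytes_per_value = 100    # assumed average stored size per value

raw_rate_kb_s = nvps * bytes_per_value / 1024
observed_mb_s = 10       # lower bound of the observed write rate from the post

print(f"raw data rate:   {raw_rate_kb_s:.1f} KB/s")
print(f"observed writes: {observed_mb_s} MB/s")
# The huge gap suggests the I/O is dominated by database overhead
# (redo log, doublewrite buffer, index updates, housekeeper deletes),
# not by the raw history data itself.
```

    In other words, ~183 values/s is only tens of KB/s of raw data, so 10-15 MB/s of writes points at DB-level write amplification rather than the monitoring data volume.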

    Thanks in advance for every advice.

    Kind regards Ondra
    Attached Files
  • cyber
    Senior Member
    Zabbix Certified Specialist, Zabbix Certified Professional
    • Dec 2006
    • 4807

    #2
    Your average laptop has more computing power than your server. If it is an all-in-one setup, then it probably struggles: each component requires some of that (little) memory and CPU, and they have a serious fight over it. The basis of a smoothly running Zabbix is a well-performing DB. I think you do not have enough resources to run it all on one machine.


    • gelny
      Junior Member
      • Feb 2021
      • 8

      #3
      Thank you for your answer, but IMHO CPU or RAM is not the problem, because CPU utilization is constantly about 30% and 50% of RAM is free. Just to be sure, I added four more vCPUs and 12 GB of RAM to the Zabbix virtual machine, but I don't see any performance change. I uploaded a screenshot of Ubuntu resource usage, maybe it will help. What I find shocking is the amount of data written: in less than an hour of Zabbix server runtime, 20 GB was written to disk, and I think that is too much. But I am unable to find which process is doing the writing. I don't think that all the proxies and monitored hosts would generate that amount of new data in less than one hour. Any thoughts?
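      For readers hitting the same "who is writing?" question: tools like iotop or "pidstat -d" answer it directly. As an illustrative sketch only (Linux-specific, and it can only see processes the caller has permission to read), the same per-process counters can be read from /proc/<pid>/io:

```python
# Sketch: list the processes with the most cumulative bytes written,
# using the write_bytes counter from /proc/<pid>/io (Linux only).
# iotop or "pidstat -d" do this properly; this is just an approximation.
import os

def top_writers(limit=5):
    """Return (pid, name, write_bytes) for the heaviest writers we can read."""
    rows = []
    try:
        pids = [p for p in os.listdir("/proc") if p.isdigit()]
    except FileNotFoundError:
        return rows  # no /proc: not a Linux system
    for pid in pids:
        try:
            with open(f"/proc/{pid}/io") as f:
                counters = dict(line.split(":") for line in f)
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except (PermissionError, FileNotFoundError, ProcessLookupError):
            continue  # process exited, or not ours to inspect
        rows.append((int(pid), name, int(counters["write_bytes"])))
    rows.sort(key=lambda r: r[2], reverse=True)
    return rows[:limit]

for pid, name, written in top_writers():
    print(f"{pid:>7} {name:<20} {written / 1024**2:.1f} MiB written")
```

      On a box like the one described, you would expect the MariaDB server process to dominate this list if the DB is the write amplifier.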
      Attached Files


      • cyber
        Senior Member
        Zabbix Certified Specialist, Zabbix Certified Professional
        • Dec 2006
        • 4807

        #4
        OK, you added some resources. You say the history syncer is very busy. What does the history syncer do? A load of DB queries... I think you did not mention what DB you are using there, but you should also look at the DB parameters: how much memory the DB can use, how much for temporary memory usage, etc. I'm sorry, I am not a DBA and do not know all those parameters by heart, but I guess with some googling you can find out what may need some adjustment. There are some webpages to calculate it, e.g. https://www.mysqlcalculator.com/ (at least it lists the parameters that affect memory usage; you may want to look up their actual meaning). There are lists of parameters for PostgreSQL as well.
        As for the heavy writing, you probably need to track down whether it is logs or the DB; there is not much else that can use up that much disk I/O.


        • gelny
          Junior Member
          • Feb 2021
          • 8

          #5
          Looks like you pointed me in the right direction, thanks a lot for that. It seems that even though there was a lot of memory available, MariaDB was not configured to use it. After I tweaked some .cnf parameters, it consumed much more memory and disk I/O decreased, which is exactly what I was looking for.
          Once again - thank you very much for your help.
          For those dealing with a similar issue: the key parameter to edit was innodb_buffer_pool_size, which was set to only 512M. I changed it to 12G and everything started to run smoothly.
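          For reference, the change described above corresponds to something like the following in MariaDB's server config. The file path varies by distro, and only the 12G buffer pool value comes from this thread; the other two lines are common companion tunings I am adding as suggestions, not something the poster confirmed:

```ini
# /etc/mysql/mariadb.conf.d/50-server.cnf  (path may differ per distro)
[mysqld]
innodb_buffer_pool_size = 12G       # value from this thread; rule of thumb
                                    # is ~70-80% of RAM on a dedicated DB host
# Often tuned alongside the buffer pool (suggestions, not from the thread):
innodb_log_file_size    = 1G        # larger redo log smooths write bursts
innodb_flush_method     = O_DIRECT  # skip the OS page cache for data files
```

          Restart MariaDB after editing and verify with SHOW VARIABLES LIKE 'innodb_buffer_pool_size'.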
