Zabbix 3.0.1 History Syncer always 100%

  • buzzerbeater
    Junior Member
    • Nov 2015
    • 8

    #1

    Zabbix 3.0.1 History Syncer always 100%

    Hi, all,

    Recently I installed Zabbix 3.0.1, and whenever I add a new host, my default 4 history syncers go to 100% busy:

    ps -ef | grep zabbix_server | grep "history syncer"
    zabbix 36255 35999 0 11:35 ? 00:01:52 /usr/sbin/zabbix_server: history syncer #1 [synced 1514 items in 102.664275 sec, syncing history]
    zabbix 36256 35999 0 11:35 ? 00:01:19 /usr/sbin/zabbix_server: history syncer #2 [synced 950 items in 275.989618 sec, syncing history]
    zabbix 36258 35999 0 11:35 ? 00:01:35 /usr/sbin/zabbix_server: history syncer #3 [synced 938 items in 171.130869 sec, syncing history]
    zabbix 36260 35999 0 11:35 ? 00:02:07 /usr/sbin/zabbix_server: history syncer #4 [synced 743 items in 155.760232 sec, syncing history]

    It takes three hours to get back to less than 1%. As I have to add 100+ hosts, the process is way too long.
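
    (For reference, the same busy figure can also be graphed from inside Zabbix with an internal item on the server host; a minimal sketch, where the 1-minute interval is only an example:)

    zabbix[process,history syncer,avg,busy]    # internal check, e.g. 1m update interval
    # the stock Zabbix server template ships a similar "internal process busy %" graph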

    My environment is as follows: Dell PowerEdge R430, 64GB RAM, 4x400GB SSD, MySQL 5.7.

    my.cnf:
    [mysqld]
    #
    # Remove leading # and set to the amount of RAM for the most important data
    # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
    # innodb_buffer_pool_size = 128M
    #
    # Remove leading # to turn on a very important data integrity option: logging
    # changes to the binary log between backups.
    # log_bin
    #
    # Remove leading # to set options mainly useful for reporting servers.
    # The server defaults are faster for transactions and fast SELECTs.
    # Adjust sizes as needed, experiment to find the optimal values.
    # join_buffer_size = 128M
    # sort_buffer_size = 2M
    # read_rnd_buffer_size = 2M
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    max_connections = 2048
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0

    innodb_file_per_table=1
    innodb_buffer_pool_size = 40G
    innodb_log_file_size=64M
    innodb-flush-log-at-trx-commit = 0
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

    And my zabbix_server.conf cache settings:

    CacheSize=2G
    HistoryCacheSize=2G
    HistoryIndexCacheSize=2G
    TrendCacheSize=2G
    ValueCacheSize=8G

    Can anyone help me with this issue? Any advice would be much appreciated!
  • kloczek
    Senior Member
    • Jun 2006
    • 1771

    #2
    Originally posted by buzzerbeater
    It takes three hours to get back to less than 1%. As I have to add 100+ hosts, the process is way too long.
    Go to Administration -> Proxies and check the "Required performance" column.
    If you have >= 1k nvps, it is possible that you've reached the maximum number of values sent in a single batch of data in server<>proxy communication (hardcoded in the Zabbix proxy source code).

    The current limit is hardcoded as the ZBX_MAX_HRECORDS #define in include/proxy.h and is exactly 1000 (I'm using a 5k limit myself).

    I remember that the Zabbix dev team had plans to turn this hardcoded limit into a proxy configuration variable, but I'm not sure whether they've had time to implement it.
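
    (For illustration: raising that limit means editing the define and rebuilding the Zabbix binaries; a minimal sketch against the 3.0 sources, where the exact header path may differ between versions:)

    /* include/proxy.h -- max history records sent per proxy<->server data batch.
     * Raising it, e.g. to the 5000 mentioned above, requires recompiling. */
    #define ZBX_MAX_HRECORDS    5000    /* stock value is 1000 */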
    http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
    https://kloczek.wordpress.com/
    zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
    My zabbix templates https://github.com/kloczek/zabbix-templates


    • buzzerbeater
      Junior Member
      • Nov 2015
      • 8

      #3
      Hi, kloczek,

      First of all, thank you for the response.

      Now I have 49816 items, 21819 triggers and 754.23 nvps.

      I'm not using proxies; do I need to change the hardcoded limit?

      Appreciate your advice.


      • kloczek
        Senior Member
        • Jun 2006
        • 1771

        #4
        Originally posted by buzzerbeater
        I'm not using proxies; do I need to change the hardcoded limit?
        Ah, so in that case it's more likely that you are still using passive monitoring.

        BTW, on using a proxy: you should switch to monitoring ALL your hosts over at least one proxy (and/or none of the hosts should be monitored directly by the server). A proxy should always be used, even in the smallest Zabbix environments, because it allows you to restart the server or schedule server downtime without losing monitoring data, and by this it adds HA to the whole Zabbix stack.
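
        (For illustration only, a minimal active-proxy sketch; the hostnames, DB name and credentials are placeholders, and each host still has to have "Monitored by proxy" set in the frontend:)

        # /etc/zabbix/zabbix_proxy.conf -- active proxy with a local MySQL backend
        ProxyMode=0                        # 0 = active proxy
        Server=zabbix-server.example.com   # where the proxy sends collected data
        Hostname=proxy-01                  # must match the proxy name in the frontend
        DBName=zabbix_proxy
        DBUser=zabbix
        DBPassword=********
        ConfigFrequency=300                # how often config is pulled from the server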


        • buzzerbeater
          Junior Member
          • Nov 2015
          • 8

          #5
          Hi, kloczek,

          I am using SNMPv2 agents to monitor our Cisco devices; most of the items are interface statistics. I didn't have such problems in Zabbix 2.4. Any ideas about the problem?

          Your advice about the proxy is very helpful; I am drafting a plan to set up proxies.

          Thank you very much.


          • kloczek
            Senior Member
            • Jun 2006
            • 1771

            #6
            Originally posted by buzzerbeater
            I am using SNMPv2 agents to monitor our Cisco devices; most of the items are interface statistics. I didn't have such problems in Zabbix 2.4. Any ideas about the problem?
            As far as I remember, SNMP bulk requests are enabled by default these days (check on your switch host's interface whether "Use bulk requests" is enabled), so you probably cannot speed things up much more here.
            However, even if you have the server and proxy running on the same system (even with an additional small MySQL DB backend dedicated to the proxy), doing SNMP monitoring over a proxy should decrease CPU utilization.
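
            (As a quick sanity check that a device actually answers bulk requests for the interface counters, something like this can be run from the server/proxy host; the community string and hostname are placeholders:)

            # SNMPv2c GETBULK walk of the 64-bit interface octet counters
            snmpbulkwalk -v2c -c public switch01.example.com IF-MIB::ifHCInOctets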

            Your advice about the proxy is very helpful; I am drafting a plan to set up proxies.
            The main goal of using a proxy even in the smallest monitored environments is to have additional HA protection.
            Even if it consumes a bit more resources (memory, CPU, IOs), IMO it is worth having everything (except Zabbix server internal checks) monitored over a proxy/proxies.
            In the future, as the whole stack grows, adding more proxies will be a natural, non-problematic step.
            A proxy compiled with MySQL support and only a 2GB InnoDB pool can handle a few thousand nvps and will consume no more than 1GB of storage.

            Really, separating the collection of data from Zabbix/SNMP agents (over a Zabbix proxy) from the process that talks to the DB backend and the web/API frontends (the Zabbix server) makes, IMO, the whole Zabbix stack much more predictable and flexible, and it becomes easier to scale up later.

            It is also good to assign separate IPs to the web frontend, server, proxy and main DB backend from the beginning, even if everything can run on a single system (just as additional IP aliases). When resource utilization grows along with the monitored environment, moving some parts to dedicated systems (for example the web frontend or the main DB backend) then only means moving those IPs to the new hosts; no other changes will be necessary when that time comes. A rough sketch of the alias idea is below.
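
            (A minimal sketch of that idea on a single box; the interface name and addresses are purely illustrative:)

            # give each role its own alias on the one box for now
            ip addr add 192.0.2.11/24 dev eth0 label eth0:web   # web frontend
            ip addr add 192.0.2.12/24 dev eth0 label eth0:srv   # zabbix server
            ip addr add 192.0.2.13/24 dev eth0 label eth0:prx   # zabbix proxy
            ip addr add 192.0.2.14/24 dev eth0 label eth0:db    # MySQL backend
            # point the Zabbix configs/DNS at these role IPs rather than the host IP;
            # moving a role later then only means moving its IP to the new machine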


            • buzzerbeater
              Junior Member
              • Nov 2015
              • 8

              #7
              Hi, kloczek

              Really appreciate your experience and advice. Thank you.
