zabbix_server.log error: Poller #x spent x seconds while updating x values.

  • cybernijntje
    Junior Member
    • Sep 2008
    • 16

    #1

    Has anyone seen the same error messages in zabbix_server.log:

    ...
    $ tail -f /var/log/zabbix/zabbix_server.log | grep -i sleep

    5198:20100121:100304.702 Poller #9 spent 0.010161 seconds while updating 1 values. Sleeping for 5 seconds
    5232:20100121:100305.341 Poller #14 spent 10.533652 seconds while updating 36 values. Sleeping for 0 seconds
    5197:20100121:100305.643 Poller #8 spent 0.000323 seconds while updating 0 values. Sleeping for 3 seconds
    5252:20100121:100307.754 Poller #0 spent 0.000324 seconds while updating 0 values. Sleeping for 5 seconds
    5254:20100121:100307.825 Sleeping for 5 seconds
    5199:20100121:100307.865 Poller #10 spent 5.177270 seconds while updating 9 values. Sleeping for 0 seconds
    5199:20100121:100307.875 Poller #10 spent 0.009583 seconds while updating 1 values. Sleeping for 3 seconds
    5239:20100121:100308.101 Poller #16 spent 11.792742 seconds while updating 38 values. Sleeping for 0 seconds
    5238:20100121:100308.221 Poller #15 spent 12.663687 seconds while updating 34 values. Sleeping for 0 seconds
    5200:20100121:100308.448 Poller #11 spent 7.399182 seconds while updating 14 values. Sleeping for 3 seconds
    5240:20100121:100308.626 Poller #17 spent 11.477568 seconds while updating 34 values. Sleeping for 0 seconds
    5241:20100121:100308.827 Poller #18 spent 10.473550 seconds while updating 32 values. Sleeping for 0 seconds
    5232:20100121:100309.538 Poller #14 spent 4.197475 seconds while updating 11 values. Sleeping for 5 seconds
    5191:20100121:100309.669 Poller #2 spent 7.348460 seconds while updating 29 values. Sleeping for 0 seconds
    5191:20100121:100309.699 Poller #2 spent 0.029097 seconds while updating 1 values. Sleeping for 3 seconds
    5201:20100121:100310.164 Poller #12 spent 7.193060 seconds while updating 15 values. Sleeping for 2 seconds
    5242:20100121:100310.173 Poller #19 spent 10.553050 seconds while updating 36 values. Sleeping for 0 seconds
    5210:20100121:100310.384 Poller #13 spent 7.232239 seconds while updating 15 values. Sleeping for 3 seconds
    5189:20100121:100310.518 Poller #0 spent 10.497898 seconds while updating 30 values. Sleeping for 0 seconds
    5193:20100121:100311.289 Poller #4 spent 6.377037 seconds while updating 29 values. Sleeping for 3 seconds
    5253:20100121:100311.309 Sleeping 10 seconds
    5255:20100121:100311.500 Sleeping for 60 seconds
    5192:20100121:100311.701 Poller #3 spent 8.419228 seconds while updating 38 values. Sleeping for 2 seconds
    5240:20100121:100312.003 Poller #17 spent 3.376527 seconds while updating 10 values. Sleeping for 5 seconds
    5239:20100121:100312.230 Poller #16 spent 4.129313 seconds while updating 11 values. Sleeping for 4 seconds
    5252:20100121:100312.756 Poller #0 spent 0.000295 seconds while updating 0 values. Sleeping for 5 seconds
    5254:20100121:100312.828 Sleeping for 5 seconds
    5190:20100121:100312.838 Poller #1 spent 11.707681 seconds while updating 39 values. Sleeping for 0 seconds
    5242:20100121:100313.559 Poller #19 spent 3.385662 seconds while updating 9 values. Sleeping for 5 seconds
    5197:20100121:100313.705 Poller #8 spent 5.061067 seconds while updating 25 values. Sleeping for 5 seconds
    5194:20100121:100313.936 Poller #5 spent 8.423608 seconds while updating 29 values. Sleeping for 2 seconds
    5241:20100121:100314.162 Poller #18 spent 5.335019 seconds while updating 14 values. Sleeping for 4 seconds
    5238:20100121:100314.449 Poller #15 spent 6.227995 seconds while updating 15 values. Sleeping for 1 seconds
    ...


    The messages above (logged at Zabbix's highest debug level) keep coming back!!
    Our Zabbix environment is buggy as hell and the server daemon 'hangs' every 20 hours or so :-(
    With 1.6.4 we didn't have any problems...
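To see which pollers are busiest, the "spent" values from lines like these can be averaged per poller with awk. A minimal sketch; two sample lines are inlined via heredoc so it runs as-is, but in practice you would feed it the real log file instead:

```shell
# Average "spent" seconds per poller: in these log lines, field $3 is the
# poller id (#N) and field $5 is the seconds spent.
# Replace the heredoc with the real log, e.g.:
#   awk '...' /var/log/zabbix/zabbix_server.log
awk '$2 == "Poller" && $4 == "spent" {
    total[$3] += $5
    count[$3]++
}
END {
    for (id in count)
        printf "Poller %s: avg %.3f s over %d polls\n", id, total[id] / count[id], count[id]
}' <<'EOF'
5232:20100121:100305.341 Poller #14 spent 10.533652 seconds while updating 36 values. Sleeping for 0 seconds
5232:20100121:100309.538 Poller #14 spent 4.197475 seconds while updating 11 values. Sleeping for 5 seconds
EOF
```

With the two sample lines this prints `Poller #14: avg 7.366 s over 2 polls`; pollers with consistently high averages are the ones worth investigating.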

    MySQL (my.cnf) configuration:
    wait_timeout=28800
    connect_timeout=5
    interactive_timeout=28800
    join_buffer_size=1M
    query_cache_size=128M
    query_cache_limit=2M
    max_allowed_packet=16M
    table_cache=1024
    sort_buffer_size=2M
    read_rnd_buffer_size=4M
    sort_buffer_size=8M
    key_buffer = 256M
    key_buffer_size=64M
    innodb_buffer_pool_size = 1000M
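Side note on the config above: `sort_buffer_size` appears twice (2M, then 8M), and both `key_buffer` and `key_buffer_size` are set (`key_buffer` is an old alias for `key_buffer_size`). When the same variable is set more than once in an option file, MySQL uses the last value. A quick awk sketch to spot literal duplicate keys (it won't catch aliases like `key_buffer` vs `key_buffer_size`, which need a manual look); the heredoc inlines a fragment as an example, point it at the real my.cnf instead:

```shell
# Print any key that appears more than once in a my.cnf-style fragment.
# Replace the heredoc with the real file, e.g.:  awk -F= '...' /etc/my.cnf
awk -F= '/=/ {
    key = $1
    gsub(/[ \t]/, "", key)   # strip whitespace around the key
    if (++seen[key] == 2)    # report each key on its second occurrence
        print key
}' <<'EOF'
sort_buffer_size=2M
read_rnd_buffer_size=4M
sort_buffer_size=8M
key_buffer = 256M
key_buffer_size=64M
EOF
```

On the sample fragment this prints `sort_buffer_size`, so here the effective value is 8M.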

    zabbix_server.conf:
    LogFileSize=300
    StartPollers=20



    Regards,
    Dennis

    506 - hosts
    7697 - items
    6885 - triggers
    119.6862 - Required performance
    Last edited by cybernijntje; 21-01-2010, 11:42.
  • nelsonab
    Senior Member
    Zabbix Certified Specialist, Zabbix Certified Professional
    • Sep 2006
    • 1233

    #2
    Either I'm blind or I'm missing something, but those are not errors. That is debug information describing how long each poller takes to collect its items. My only thought is to look into some of the MySQL tuning threads. How is your system doing for memory? The new server code does some caching and may be trying to allocate more memory than you have available.
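For the memory question, a quick snapshot with standard Linux tools; nothing Zabbix-specific here, and the process name `zabbix_server` is the usual default:

```shell
# Overall memory picture (free/used/cached, in MiB)
free -m
# Resident set size (RSS, in KiB) of each zabbix_server process, largest last
ps -o pid,rss,cmd -C zabbix_server --no-headers | sort -k2 -n
```

If the totals from `ps` plus the MySQL buffers approach physical RAM, the caching in the new server code could indeed be the problem.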
    RHCE, author of zbxapi
    Ansible, the missing piece (Zabconf 2017): https://www.youtube.com/watch?v=R5T9NidjjDE
    Zabbix and SNMP on Linux (Zabconf 2015): https://www.youtube.com/watch?v=98PEHpLFVHM


    • cybernijntje
      Junior Member
      • Sep 2008
      • 16

      #3
      @nelsonab: thanks for the reply!

      We're putting this issue on hold for the time being, since we're experiencing XenServer 5.5 network latency issues (these seem to have been fixed, but applying the fix requires rebooting production environments, so this may take a while...).

      I guess this caused the 'latency' issues in Zabbix as well.

      I know, I know... *starts kicking himself continuously*
