Elasticsearch Monitoring causes Preprocessing Manager Queue

  • XPystchrisX
    Junior Member
    • Sep 2013
    • 4

    #1

    Elasticsearch Monitoring causes Preprocessing Manager Queue

    Zabbix 6.0.3 from packages on Ubuntu 18.04 with PostgreSQL 13 and TimescaleDB 2.6
    Proxy performing monitoring on separate ubuntu 18.04 host
    Elasticsearch 7.17.2 cluster with https enabled
    Elasticsearch Template 6.0

    Last week I performed an upgrade from 5.4 to 6.0.2. Prior to the upgrade, Elasticsearch was being monitored just fine. After the upgrade, however, the proxy monitoring the ES cluster immediately started showing a massive preprocessing manager queue, and all agents sending data to that proxy started being marked as offline. After a bit of troubleshooting (mostly poking in the dark because it was late and I was tired), I found that disabling the Elasticsearch host and restarting the proxy would clear the preprocessing manager queue and bring the other monitored items back.
    At first I was on version 6.0.2 and thought that maybe a down-level template was causing the problem, so I tried to upgrade the template to the 6.0 version. I ran into the issue where I could not import a new template with triggers, so I waited until today and upgraded to 6.0.3 after reviewing the release notes. I then upgraded the template, and still no luck.
    I'm at a bit of a loss. If I enable debug level 4 logging on either the proxy or the server, I just get a massive scroll of data and don't know where to start filtering to find the source of the issue. I'll fully admit that I was a bit tired when I did the upgrade from 5.4 to 6.0. The database portion of the upgrade was relatively smooth, but I'm worried that I goofed something up with the table upgrades.
    If anyone has any pointers, that'd be fantastic.
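    For anyone digging into the same symptom, a minimal sketch (not from this post) of an internal item that makes the queue itself visible. zabbix[preprocessing_queue] is a standard Zabbix internal item; the item name and interval below are placeholders, and the item needs to sit on a host monitored by the affected proxy so that proxy collects it.

    Code:
    # Sketch only, in Zabbix 6.0 template YAML export format:
    # an internal item to graph the preprocessing queue size.
    - name: 'Preprocessing queue size'      # placeholder name
      type: INTERNAL
      key: 'zabbix[preprocessing_queue]'
      delay: 1m
      history: 7d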
  • XPystchrisX
    Junior Member
    • Sep 2013
    • 4

    #2
    If anyone finds this: the fix was actually mentioned in the official discussion thread for the Elasticsearch template here: Discussion thread for official Zabbix Template for ElasticSearch - ZABBIX Forums
    In that thread there is mention of adjusting the template so that the line "url: '{$ELASTICSEARCH.SCHEME}://{HOST.CONN}:{$ELASTICSEARCH.PORT}/_nodes/stats'" is limited to just the jvm, indices, and fs counters. I did this and my proxies were immediately able to poll data from the Elasticsearch nodes without issues. I then adjusted the query to the following so that all of the items in the template would have data available:
    url: '{$ELASTICSEARCH.SCHEME}://{HOST.CONN}:{$ELASTICSEARCH.PORT}/_nodes/stats/jvm,indices,fs,http,thread_pool'
    I'm now getting no preprocessing manager queue, nor am I having issues with agents timing out.
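    For context, a sketch of where that line lives in the template export. The item name and key below are approximations of the master HTTP agent item in the official Elasticsearch template; the only real change is the url value described above.

    Code:
    # Sketch in Zabbix 6.0 template YAML export format; name/key approximate.
    - name: 'ES: Get nodes stats'
      type: HTTP_AGENT
      key: es.nodes.get
      url: '{$ELASTICSEARCH.SCHEME}://{HOST.CONN}:{$ELASTICSEARCH.PORT}/_nodes/stats/jvm,indices,fs,http,thread_pool'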

    • eyewing
      Junior Member
      • Apr 2022
      • 2

      #3
      Same issue here with Zabbix 6.0.3 and Elasticsearch 7.17.0. The workaround with

      Code:
      url: '{$ELASTICSEARCH.SCHEME}://{HOST.CONN}:{$ELASTICSEARCH.PORT}/_nodes/stats/jvm,indices,fs,http,thread_pool'
      is not working for us. The JSON reply from Elasticsearch is 6.1 MB.

      (Screenshot attached: Screenshot 2022-04-12 at 19.28.01.png)
      The preprocessing module uses one full CPU core right after the proxy restarts, and agents pinned to this proxy go offline.
      Can anyone point us in the right direction on this problem?
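      Not from this thread, but since the problem here is the sheer size of the reply: Elasticsearch also accepts a filter_path query parameter that trims the returned JSON to the listed fields. This is a sketch of the idea only; the field list is illustrative, and any dependent items that rely on fields removed by the filter would stop working.

      Code:
      # Sketch: restrict both the stats groups and the returned JSON fields.
      url: '{$ELASTICSEARCH.SCHEME}://{HOST.CONN}:{$ELASTICSEARCH.PORT}/_nodes/stats/jvm,indices,fs,http,thread_pool?filter_path=nodes.*.jvm,nodes.*.indices,nodes.*.fs,nodes.*.http,nodes.*.thread_pool'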

      • eyewing
        Junior Member
        • Apr 2022
        • 2

        #4
        Increasing the update interval to 3 minutes fixed the problem.
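        In template terms that corresponds to raising the delay on the master HTTP agent item. A sketch in the 6.0 YAML export format; the item name and key are approximate, and 3m simply means the _nodes/stats URL is polled every three minutes.

        Code:
        # Sketch: poll the (already trimmed) _nodes/stats endpoint every 3 minutes.
        - name: 'ES: Get nodes stats'
          type: HTTP_AGENT
          key: es.nodes.get
          delay: 3m
          url: '{$ELASTICSEARCH.SCHEME}://{HOST.CONN}:{$ELASTICSEARCH.PORT}/_nodes/stats/jvm,indices,fs,http,thread_pool'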
        Last edited by eyewing; 14-04-2022, 10:33.
