  • g8a
    Junior Member
    • May 2026
    • 2

    #1

    Too many long running queries in Zabbix PG "SELECT h.* FROM history..."

    Hello!
    I'm experiencing a lot of long-running queries on our Zabbix PostgreSQL server, and they come from Grafana dashboards (that's the main assumption; the DB overload could actually be caused by Grafana's Direct DB Connection bypassing the API and querying the history tables directly. Reducing pm.max_children helps with RAM but does not stop the DB flood.)

     seconds_running | table_name   | application_name | query_preview
    -----------------+--------------+------------------+----------------------------------------------------------
               57249 | history_uint |                  | SELECT h.* FROM history_uint h WHERE h.itemid IN (XXXX)
               57249 | history      |                  | SELECT h.* FROM history h WHERE h.itemid IN (XXXX)

    As you can see, the affected tables are "history_uint" and "history", and this is hurting our environment because the Grafana dashboards never finish loading.
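    (For anyone wanting to reproduce this check: a query along these lines against pg_stat_activity can list long-running statements. This is only a sketch — the column aliases and the 5-minute threshold are assumptions, not necessarily the exact query that produced the table above.)

    ```sql
    -- List active queries that have been running for more than 5 minutes
    -- (threshold and aliases are illustrative assumptions).
    SELECT EXTRACT(EPOCH FROM now() - query_start)::int AS seconds_running,
           application_name,
           left(query, 60) AS query_preview
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '5 minutes'
    ORDER BY seconds_running DESC;
    ```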


    psql (PostgreSQL) 13.23
    zabbix_server (Zabbix) 7.0.21

    We have tried restarting, scaling up the server RAM, raising pm.max_children from 50 to 180...

    None of these have helped, and the queries still take too long to run.

    Any ideas how to troubleshoot this? It has been happening since the last Azure maintenance (last weekend, when they restarted the Zabbix DB).

    Thanks in advance.
    Last edited by g8a; Yesterday, 10:38.
  • guntis_liepins
    Member
    • Oct 2025
    • 61

    #2
    This query is insane: it is not limited in time or size. Something seriously wrong is being done in the Grafana dashboard.
    These queries mean that the Grafana dashboard, for some reason, reads ALL history for the included items...
    You must fix the queries for the Grafana dashboard.
    There are other options, such as partitioning, TimescaleDB, and reducing the size of the history tables, but with queries that repeatedly scan the entire history tables, that will not help.


    • g8a
      Junior Member
      • May 2026
      • 2

      #3
      Originally posted by guntis_liepins
      This query is insane: it is not limited in time or size. Something seriously wrong is being done in the Grafana dashboard.
      These queries mean that the Grafana dashboard, for some reason, reads ALL history for the included items...
      You must fix the queries for the Grafana dashboard.
      There are other options, such as partitioning, TimescaleDB, and reducing the size of the history tables, but with queries that repeatedly scan the entire history tables, that will not help.
      Thanks for your answer.

      The problem is that we are using the Zabbix data source, not the DB one, so we can't customize the DB query.


      • ZabbixSeeker
        Junior Member
        • Mar 2026
        • 16

        #4
        There's an official grafana-plugin and/or grafana-zabbix-integration somewhere. Use that instead of manual SQL queries.


        • cyber
          Senior Member
          Zabbix Certified SpecialistZabbix Certified Professional
          • Dec 2006
          • 4914

          #5
          I have just the exact same case...
          Some API calls (which this Grafana Zabbix plugin also uses) manage to run for hours... which in the end creates locks in the DB, and that affects other things like compression jobs for TimescaleDB, etc...
          In my case I was able to trace it to some specific script items and I killed those; I will see tomorrow how the DB behaves...

          The query itself is not even a huge one... the API call behind it is just like the history.get example from the docs:
          Code:
          {
              "jsonrpc": "2.0",
              "method": "history.get",
              "params": {
                  "output": "extend",
                  "history": 0,
                  "itemids": "23296",
                  "sortfield": "clock",
                  "sortorder": "DESC",
                  "limit": 10
              },
              "id": 1
          }
          We just had a few more items (23) in the list, and the limit is also 23... basically finding the last values for 23 items.
          This is the resulting query:
          SELECT h.* FROM history_uint h WHERE h.itemid IN (26342018,26389356,26389725,26395776,26397646,26400455,26401893,26409391,26412250,26414105,26414294,26420466,26423043,26425512,26427469,26429200,26431452,26432650,26434521,26437200,26440663,26443329,26445753) AND h.clock>=1777442456 ORDER BY h.clock DESC LIMIT 23
          But if you are not using the time_from and time_till parameters here, then obviously it will do a full scan of history for those values... that "clock" value is taken as 14 days ago, which is as long as we keep history...
          And for some reason this query manages to run for hours... funny enough, there are actually no values in history, as this specific monitoring has been broken for a while; the items are all "not supported" because the master item fails...
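          (For comparison, if the history.get call supplied both time_from and time_till, the generated SQL would be bounded on both ends of the clock range. This is only a sketch: the upper timestamp is an assumed placeholder, and the itemid list is abbreviated.)

          ```sql
          -- Bounded variant: both ends of the time range constrain the
          -- (itemid, clock) index scan instead of reading to the end of history.
          SELECT h.*
          FROM history_uint h
          WHERE h.itemid IN (26342018, 26389356, 26443329)  -- abbreviated list
            AND h.clock >= 1777442456                       -- time_from
            AND h.clock <= 1777528856                       -- time_till (assumed)
          ORDER BY h.clock DESC
          LIMIT 23;
          ```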

          The same happens whether or not compression is used in the DB; without compression, the locking effects of those queries are not as big...
          The Zabbix version has been 7.0.22-25... During this time I managed to migrate my DB from PG 14 to PG 16... the webserver changed from Apache to nginx... no difference in behaviour...

          So as far as I can tell, it is not exactly a Grafana plugin problem, but an API call issue...

          g8a, what's the query you used to display those long-running queries?


          Originally posted by ZabbixSeeker
          There's an official grafana-plugin and/or grafana-zabbix-integration somewhere. Use that instead of manual SQL queries.
          Exactly what the OP is doing... They said it in their last post: "The problem is we are using zabbix data source, and not the db one, so we can't customize the db query."
          Last edited by cyber; Today, 15:08.

