  • aborgeld
    Junior Member
    • Oct 2013
    • 4

    #1

    SQL statement for

    Hi guys,

    I want to use the following SQL statement on the Zabbix database.
    When I add a join to the history table (see the commented-out lines below), the query takes so long to load that it effectively never finishes.

    I don't know the database model well. Do you have any tips?

    Code:
    SELECT DISTINCT 
      hosts.host AS RESOURCENAME,
      hosts.name AS RESOURCEKIND,
    #history.clock AS "TIMESTAMP",
      "host.name" AS IDENTKEY1,
      hosts.name AS IDENTVALUE1,
      "host.key" AS IDENTKEY2,
      hosts.hostid AS IDENTVALUE2,
      SUBSTRING_INDEX(items.key_, '.', -2) AS METRICNAME1,
      triggers.description, 
      functions.triggerid, 
      events.acknowledged, 
      triggers.value, 
      items.lastvalue
    FROM triggers
      INNER JOIN functions ON ( functions.triggerid = triggers.triggerid )
      INNER JOIN items ON ( items.itemid = functions.itemid )
    #INNER JOIN history ON ( items.itemid = history.itemid)
      INNER JOIN hosts ON ( items.hostid = hosts.hostid )
      INNER JOIN events ON ( events.objectid = triggers.triggerid )
    WHERE (events.eventid DIV 100000000000000)
      IN (0)
      AND (events.object-0)=0
      AND (triggers.value=1 OR (triggers.value =0 AND unix_timestamp(now()) -   triggers.lastchange <60))
      AND hosts.status =0
      AND items.status =0
      AND triggers.status =0
    GROUP BY functions.triggerid
    ORDER BY triggers.lastchange DESC
    limit 200
    Thanks,

    Kind regards,

    André
  • jan.garaj
    Senior Member
    Zabbix Certified Specialist
    • Jan 2010
    • 506

    #2
    Show your database execution plan. For MySQL it's the SQL query:
    Code:
    EXPLAIN <YOUR_SQL>
    I'm attaching my plan from my test Zabbix environment (1 host only). You probably have a huge history table, so the DB has a problem.
    You can try to optimize your SQL (removing filesorts/temporary tables, creating better indexes, ...) based on the info from the execution plan - but this problem is outside the domain of the Zabbix forum.
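    To illustrate the workflow above: run EXPLAIN on the statement, read the type, rows, and Extra columns, then index whatever gets scanned without a key. A minimal sketch (the index name and choice of columns are assumptions - let your own EXPLAIN output drive them):
    Code:
    ```sql
    -- type=ALL with a large "rows" value means a full table scan;
    -- "Using temporary; Using filesort" in Extra is another red flag.
    EXPLAIN
    SELECT hosts.host, triggers.description
    FROM triggers
      INNER JOIN functions ON functions.triggerid = triggers.triggerid
      INNER JOIN items     ON items.itemid  = functions.itemid
      INNER JOIN hosts     ON items.hostid  = hosts.hostid;

    -- If a join column has no usable key, adding one often fixes the scan,
    -- e.g. (hypothetical - only if no equivalent index exists already):
    CREATE INDEX events_objectid ON events (object, objectid);
    ```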
    Devops Monitoring Expert advice: Dockerize/automate/monitor all the things.
    My DevOps stack: Docker / Kubernetes / Mesos / ECS / Terraform / Elasticsearch / Zabbix / Grafana / Puppet / Ansible / Vagrant


    • Pada
      Senior Member
      • Apr 2012
      • 236

      #3
      Try the following query:
      Code:
      SELECT DISTINCT 
        hosts.host AS RESOURCENAME,
      #history.clock AS "TIMESTAMP",
      h.clock as "TIMESTAMP",
        "host.key" AS IDENTKEY2,
        hosts.hostid AS IDENTVALUE2,
        SUBSTRING_INDEX(items.key_, '.', -2) AS METRICNAME1,
        triggers.description, 
        functions.triggerid, 
        events.acknowledged, 
        triggers.value
      FROM triggers
        INNER JOIN functions ON ( functions.triggerid = triggers.triggerid )
        INNER JOIN items ON ( items.itemid = functions.itemid )
      #INNER JOIN history ON ( items.itemid = history.itemid)
      INNER JOIN (select clock, itemid from history  group by itemid order by clock desc) h ON h.itemid = items.itemid
       
        INNER JOIN hosts ON ( items.hostid = hosts.hostid )
        INNER JOIN events ON ( events.objectid = triggers.triggerid )
      WHERE (events.eventid DIV 100000000000000)
        IN (0)
        AND (events.object-0)=0
        AND (triggers.value=1 OR (triggers.value =0 AND unix_timestamp(now()) -   triggers.lastchange <60))
        AND hosts.status =0
        AND items.status =0
        AND triggers.status =0
      GROUP BY functions.triggerid
      ORDER BY triggers.lastchange DESC
      limit 20;
      * Please take note that I removed some of the fields, because my Zabbix 1.8 installation does not have them.
      The above query returned "16 rows in set (12.77 sec)" the first time I ran it; seconds later, when I ran it again, it returned "16 rows in set (3.69 sec)" thanks to the query caching I have enabled.

      Here is how the "explain" results look for the query above:
      Code:
      +----+-------------+------------+--------+-------------------------------+-------------+---------+------------------------------+---------+-----------------------------------------------------------+
      | id | select_type | table      | type   | possible_keys                 | key         | key_len | ref                          | rows    | Extra                                                     |
      +----+-------------+------------+--------+-------------------------------+-------------+---------+------------------------------+---------+-----------------------------------------------------------+
      |  1 | PRIMARY     | <derived2> | ALL    | NULL                          | NULL        | NULL    | NULL                         |    2907 | Using temporary; Using filesort                           |
      |  1 | PRIMARY     | functions  | ref    | functions_1,functions_2       | functions_2 | 8       | h.itemid                     |       1 |                                                           |
      |  1 | PRIMARY     | triggers   | eq_ref | PRIMARY,triggers_1,triggers_2 | PRIMARY     | 8       | zabbix18.functions.triggerid |       1 | Using where                                               |
      |  1 | PRIMARY     | items      | eq_ref | PRIMARY,items_1,items_3       | PRIMARY     | 8       | h.itemid                     |       1 | Using where                                               |
      |  1 | PRIMARY     | hosts      | eq_ref | PRIMARY,hosts_2               | PRIMARY     | 8       | zabbix18.items.hostid        |       1 | Using where                                               |
      |  1 | PRIMARY     | events     | ALL    | NULL                          | NULL        | NULL    | NULL                         | 3722570 | Using where; Using join buffer                            |
      |  2 | DERIVED     | history    | range  | NULL                          | history_1   | 8       | NULL                         |      17 | Using index for group-by; Using temporary; Using filesort |
      +----+-------------+------------+--------+-------------------------------+-------------+---------+------------------------------+---------+-----------------------------------------------------------+
      7 rows in set (0.05 sec)
      vs. your original query, which takes forever on my DB:
      Code:
      +----+-------------+-----------+--------+-------------------------------+-------------+---------+-----------------------------+----------+----------------------------------------------+
      | id | select_type | table     | type   | possible_keys                 | key         | key_len | ref                         | rows     | Extra                                        |
      +----+-------------+-----------+--------+-------------------------------+-------------+---------+-----------------------------+----------+----------------------------------------------+
      |  1 | SIMPLE      | events    | ALL    | NULL                          | NULL        | NULL    | NULL                        |  3722578 | Using where; Using temporary; Using filesort |
      |  1 | SIMPLE      | triggers  | eq_ref | PRIMARY,triggers_1,triggers_2 | PRIMARY     | 8       | zabbix18.events.objectid    |        1 | Using where                                  |
      |  1 | SIMPLE      | functions | ref    | functions_1,functions_2       | functions_1 | 8       | zabbix18.triggers.triggerid |        1 | Using where                                  |
      |  1 | SIMPLE      | items     | eq_ref | PRIMARY,items_1,items_3       | PRIMARY     | 8       | zabbix18.functions.itemid   |        1 | Using where                                  |
      |  1 | SIMPLE      | hosts     | eq_ref | PRIMARY,hosts_2               | PRIMARY     | 8       | zabbix18.items.hostid       |        1 | Using where                                  |
      |  1 | SIMPLE      | history   | ref    | history_1                     | history_1   | 8       | zabbix18.functions.itemid   | 34259673 | Using index                                  |
      +----+-------------+-----------+--------+-------------------------------+-------------+---------+-----------------------------+----------+----------------------------------------------+
      6 rows in set (0.01 sec)
      These kinds of queries were brand new to me - it took me ~2 hours to figure out! I love challenges

      The issue wasn't joining the history table alone; joining the massive events table was also part of it.
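      One caveat on the derived table used above: selecting a non-aggregated clock with GROUP BY itemid relies on MySQL's permissive GROUP BY handling, so the clock returned per item is not guaranteed to be the latest one (and the query is rejected under ONLY_FULL_GROUP_BY). A deterministic sketch of the same "latest sample per item" subquery:
      Code:
      ```sql
      -- MAX(clock) makes the chosen timestamp well-defined for every itemid,
      -- instead of depending on which row MySQL happens to keep per group.
      SELECT itemid, MAX(clock) AS clock
      FROM history
      GROUP BY itemid
      ```
      This can be dropped in as the derived table h without changing the rest of the query.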
      Last edited by Pada; 25-10-2013, 00:25.


      • aborgeld
        Junior Member
        • Oct 2013
        • 4

        #4
        query

        Thanks, but there is still one problem. When I also join the history table directly (the join that was commented out before), as in the query below, it doesn't work.

        You can try this query. It takes forever again.

        Code:
        SELECT DISTINCT
        hosts.host AS RESOURCENAME,
        history.clock AS "TIMESTAMP",
        h.clock as "TIMESTAMP",
        "host.key" AS IDENTKEY2,
        hosts.hostid AS IDENTVALUE2,
        SUBSTRING_INDEX(items.key_, '.', -2) AS METRICNAME1,
        triggers.description,
        functions.triggerid,
        events.acknowledged,
        triggers.value
        FROM triggers
        INNER JOIN functions ON ( functions.triggerid = triggers.triggerid )
        INNER JOIN items ON ( items.itemid = functions.itemid )
        INNER JOIN history ON ( items.itemid = history.itemid)
        INNER JOIN (select clock, itemid from history group by itemid order by clock desc) h ON h.itemid = items.itemid

        INNER JOIN hosts ON ( items.hostid = hosts.hostid )
        INNER JOIN events ON ( events.objectid = triggers.triggerid )
        WHERE (events.eventid DIV 100000000000000)
        IN (0)
        AND (events.object-0)=0
        AND (triggers.value=1 OR (triggers.value =0 AND unix_timestamp(now()) - triggers.lastchange <60))
        AND hosts.status =0
        AND items.status =0
        AND triggers.status =0
        GROUP BY functions.triggerid
        ORDER BY triggers.lastchange DESC
        limit 20;

        Kind regards,

        André


        • aborgeld
          Junior Member
          • Oct 2013
          • 4

          #5
          events

          You're right: when I run the query without the events table, it works with the history join. Now I'm wondering how to use the events and history tables together in the right way. I will test it myself as well.


          • aborgeld
            Junior Member
            • Oct 2013
            • 4

            #6
            Solved

            I'm sorry, you joined history in another way. Thanks for all the information. It was very nice to learn why it took so long!
            This thread is closed


            • jan.garaj
              Senior Member
              Zabbix Certified Specialist
              • Jan 2010
              • 506

              #7
              Problem: the database engine has to read many records (see the rows column in the explain output):
              ~3722578 records from the events table (without using indexes - bad)
              ~34259673 records from the history table (see the key column in the explain output)

              1.) Optimization of the SQL query
              You can optimize your query for better performance:
              e.g. try changing "AND (events.object-0)=0" to "AND events.object=0" (if possible)
              Also try to remove the subselect (select clock, itemid from history group by itemid order by clock desc)
              Look up any appropriate resource on MySQL performance tuning on the internet.
              I'm not familiar with the Zabbix DB model, so I can't help you further - these are just general recommendations.

              2.) Optimization of the MySQL server
              The events/history tables are InnoDB, so reconfigure your MySQL server for the best InnoDB performance (a bigger value for innodb_buffer_pool_size).
              You can use http://MySQLTuner.pl to analyze your current state and get MySQL server recommendations.
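              As a concrete illustration of point 2, a minimal my.cnf sketch (the 2G figure is only a placeholder - size it to your RAM and working set, it is not a recommendation from this thread):
              Code:
              ```ini
              [mysqld]
              # InnoDB caches data and index pages here; a common starting point
              # on a dedicated DB host is 50-70% of available RAM.
              innodb_buffer_pool_size = 2G
              ```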

              3.) Table partitioning
              Partitioning will probably also help - dividing a huge table into "subtables" by date. See the documentation (MySQL; Zabbix also recommends it).
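              A sketch of what the date-based partitioning in point 3 can look like for the history table (partition names and boundaries are illustrative only; on a production Zabbix DB follow the official partitioning guides, since repartitioning a huge table is expensive):
              Code:
              ```sql
              -- history.clock is a unix timestamp, so RANGE partitioning by month
              -- works; old data can then be removed cheaply with
              -- ALTER TABLE ... DROP PARTITION instead of a slow DELETE.
              ALTER TABLE history
              PARTITION BY RANGE (clock) (
                  PARTITION p2013_10 VALUES LESS THAN (UNIX_TIMESTAMP('2013-11-01')),
                  PARTITION p2013_11 VALUES LESS THAN (UNIX_TIMESTAMP('2013-12-01')),
                  PARTITION pmax     VALUES LESS THAN MAXVALUE
              );
              ```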


              • satyaprasad07
                Junior Member
                • Mar 2018
                • 2

                #8
                How do I get the remote command status = executed from the Zabbix action log?
