Has anyone used Zabbix to scrape Node Exporter metrics before?

  • G0nz0uk
    Member
    • Apr 2021
    • 58

    #1

    Has anyone used Zabbix to scrape Node Exporter metrics before?

    Hello,

    We have a few hundred Linux devices running Node Exporter, which we use with Grafana. I now want Zabbix to go to each node and scrape certain metrics from node exporter. It's all in Prometheus format, and I think Zabbix can convert it to JSON so it can read it better and create triggers from them.
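    For reference, node_exporter exposes plain-text Prometheus lines, and Zabbix's "Prometheus to JSON" preprocessing turns them into a JSON array of objects with the metric name, value, and labels. Here is a rough sketch of that conversion in Python (the sample metrics are illustrative, and the exact fields in Zabbix's real output may differ slightly):

```python
import json
import re

# One Prometheus exposition line: name, optional {labels}, value.
LINE_RE = re.compile(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                     r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def prometheus_to_json(text):
    """Sketch of what a 'Prometheus to JSON' step produces:
    a list of {name, value, labels} objects. Ignores edge cases
    like commas inside label values."""
    out = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):   # skip HELP/TYPE comments
            continue
        m = LINE_RE.match(line)
        if not m:
            continue
        labels = {}
        if m.group('labels'):
            for pair in m.group('labels').split(','):
                key, val = pair.split('=', 1)
                labels[key] = val.strip('"')
        out.append({'name': m.group('name'),
                    'value': m.group('value'),
                    'labels': labels})
    return out

sample = '''# HELP node_filesystem_avail_bytes Available bytes.
node_filesystem_avail_bytes{device="/dev/sda1",mountpoint="/"} 4.2e+10
node_load1 0.58'''
print(json.dumps(prometheus_to_json(sample), indent=2))
```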

    Has anyone done this before or know of a guide to try this please?

    Thanks
  • kyus
    Senior Member
    • Feb 2024
    • 191

    #2
    Hey!

    I've done something similar in order to monitor PostgreSQL exporter and also Kubernetes. The documentation is pretty helpful to get started: https://www.zabbix.com/documentation...pes/prometheus

    I've seen there are some community templates, but I haven't tested them. I would recommend checking a template that already uses the Prometheus pattern, for example the Kubernetes templates (especially the Kubelet one); they are useful for understanding how to structure everything.


    • G0nz0uk
      Member
      • Apr 2021
      • 58

      #3
      Thanks I'll check those out. I did find https://www.zabbix.com/documentation...pes/prometheus

      I got this far before posting and got lost with what to do here:

      [screenshot: image.png (35.9 KB)]

      And the preprocessing:

      [screenshot: image.png (28.0 KB)]


      • G0nz0uk
        Member
        • Apr 2021
        • 58

        #4
        I did try this template https://git.zabbix.com/projects/ZBX/.../os/linux_prom

        However, the import failed as unsupported:

        • Invalid tag "/zabbix_export/version": unsupported version number.
        We are on Zabbix 7.4.8, which was released on 13/03/2026.


        • kyus
          Senior Member
          • Feb 2024
          • 191

          #5
          Usually I would go for:

          1. An HTTP item collecting all the metrics of an endpoint (this will be the master item); basically the item that does the actual request (but without any preprocessing).

          2. Then you create a dependent discovery rule with the "Prometheus to JSON" preprocessing and some LLD macros (I can't really help here since I don't know what your exporter displays) to create your items later on; you can also create filters if needed.

          3. Now you can create Dependent Item Prototypes in your newly created Dependent Discovery Rule. Those Dependent Item Prototypes will need the Prometheus Pattern preprocessing, and it is here that you will use your LLD macros.
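          Applied to node_exporter, the three steps above might look roughly like this (just a sketch; the metric name, macro, and item names are illustrative, though 9100 is node_exporter's default port):

          Code:
          Master item (HTTP agent):
               Name: Get node exporter metrics
               URL: http://{HOST.CONN}:9100/metrics
               Preprocessing: none

          Discovery rule (dependent item on the master item):
               Preprocessing: Prometheus to JSON -> node_filesystem_avail_bytes
               LLD Macros:
                    - {#MOUNTPOINT} -> $.labels.mountpoint

          Item prototype (dependent item on the master item):
               Name: Free space on {#MOUNTPOINT}
               Preprocessing: Prometheus pattern -> node_filesystem_avail_bytes{mountpoint="{#MOUNTPOINT}"}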


          It's pretty confusing at first, so I'll try to give you an example of what I've set up to get some PostgreSQL metrics:


          Item configuration:

          Code:
          Name: Get PostgreSQL exporter metrics
          HTTP item WITHOUT preprocessing (very important)

          Dependent Discovery rule:

          Code:
          Name: PG stat user tables discovery
          Type: Dependent Item
          Master Item: Get PostgreSQL exporter metrics
          Preprocessing: Prometheus to JSON -> pg_stat_user_tables_n_tup_del (this is the part of the metrics that contains values that I will use for LLD macros)
          LLD Macros:
               - {#DATNAME}          -> $.labels.datname
               - {#RELNAME}          -> $.labels.relname
               - {#SCHEMANAME}  -> $.labels.schemaname
          Filters: {#DATNAME} -> matches -> {$DATNAME.MATCHES}
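          For context, those JSONPath macros pick fields out of each element of the "Prometheus to JSON" output, which looks roughly like this (values illustrative):

          Code:
          [
            {
              "name": "pg_stat_user_tables_n_tup_del",
              "value": "42",
              "labels": {
                "datname": "mydb",
                "relname": "accounts",
                "schemaname": "public"
              }
            }
          ]

          so {#DATNAME} <- $.labels.datname -> "mydb", and so on for the other macros.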

          Dependent Item Prototype:

          Code:
          Name: Rows deleted on database: {#DATNAME}
          Type: Dependent Item
          Key: pg.stat_user_tables_n_tup_del["{#DATNAME}"]
          Master Item: Get PostgreSQL exporter metrics
          Preprocessing: 
               - Prometheus Pattern -> pg_stat_user_tables_n_tup_del{datname="{#DATNAME}"} -> sum
               - (You may want to use Simple change or Change per second in some cases. This depends on whether the metric is a counter or a gauge: counter -> Change per second (to get the rate); gauge -> no preprocessing, or in some cases Simple change.)
          Keep in mind that you'll probably need multiple Discovery rules with multiple items, depending on what you want to monitor.
          This setup basically uses the Discovery rule simply to get LLD Macros.
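          To make the counter-vs-gauge point concrete, "Change per second" is just the difference between two consecutive samples divided by the time between them. A tiny sketch of the arithmetic (not Zabbix code, just an illustration):

```python
def change_per_second(prev_value, prev_ts, value, ts):
    """What a 'Change per second' step computes for a counter:
    the per-second rate between two consecutive samples."""
    return (value - prev_value) / (ts - prev_ts)

# e.g. a counter that went from 1200 to 1500 over 30 seconds
print(change_per_second(1200, 0, 1500, 30))
```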

          I hope this can help you in some way!

