
8 Elasticsearch setup

Zabbix can store history data in Elasticsearch as an alternative to a relational database.

Elasticsearch support is currently experimental.

This guide covers the setup for Elasticsearch 7.X. If you're using a different version, some functionality may not work as intended.

The setup involves creating an Elasticsearch storage location for each value type, setting up preprocessing (if needed), and connecting Zabbix to your Elasticsearch instance.

Elasticsearch can store the following value types:

Item value type      Database table   Elasticsearch type
Numeric (unsigned)   history_uint     uint
Numeric (float)      history          dbl
Character            history_str      str
Log                  history_log      log
Text                 history_text     text
Binary               history_bin      not supported by Zabbix
JSON                 history_json     json

Important notes

  • Elasticsearch requires libcurl. See requirements for details.
  • The housekeeper does not delete data from Elasticsearch.
  • If all history data is stored in Elasticsearch, trends are not calculated or stored in the database. Consider extending the history storage period.
  • When Elasticsearch is used, range queries that retrieve values are limited by the timestamp of the configured data storage period.
  • Elasticsearch is not supported for Zabbix proxy; please use SQLite instead.

If Elasticsearch isn't installed yet, refer to the official installation guide before proceeding.

Configuring Elasticsearch

To store history data in Elasticsearch, you need to:

  • Create an index for each value type you want to store—this is where Elasticsearch stores the data, similar to a table in a relational database.
  • Define a mapping for each index—this defines the structure of the data, similar to a table schema.
  • Set up an ingest pipeline to process values before storage (required for JSON values and date-based indices).

Elasticsearch can store data in a single index per value type, or across multiple date-based indices. Both approaches are described below.

Storing history in a single index

In this approach, all history data for a given value type is written to a single index (e.g., uint or text).

To create an index for the Numeric (unsigned) value type, send the following request (with /uint in the URL) to your Elasticsearch instance:

curl -X PUT \
        http://localhost:9200/uint \
        -H 'content-type:application/json' \
        -d '{
            "settings": {
             "index": {
                "number_of_replicas": 1,
                "number_of_shards": 5
             }
          },
          "mappings": {
             "properties": {
                "itemid": { "type": "long" },
                "clock": { "format": "epoch_second", "type": "date" },
                "value": { "type": "long" }
             }
          }
       }'

Elasticsearch will respond with a confirmation that the index was created:

{"acknowledged": true, "shards_acknowledged": true, "index": "uint"}

Similar requests must be sent for each additional value type you want to store in Elasticsearch.

Mappings for all value types are available in the Zabbix source repository.

For example, to create an index for the Text value type:

curl -X PUT \
        http://localhost:9200/text \
        -H 'content-type:application/json' \
        -d '{
          "settings": {
             "index": {
                "number_of_replicas": 1,
                "number_of_shards": 5
             }
          },
          "mappings": {
             "properties": {
                "itemid": { "type": "long" },
                "clock": { "format": "epoch_second", "type": "date" },
                "value": {
                   "fields": {
                      "analyzed": { "index": true, "type": "text", "analyzer": "standard" }
                   },
                   "index": false,
                   "type": "text"
                }
             }
          }
       }'

JSON value type

Unlike other value types, JSON values require additional processing before storage.

The index below uses separate fields for parsed and raw values, so an ingest pipeline is needed to parse each value as JSON and store it in the correct field.

To create an index for the JSON value type, send the following request (with /json in the URL) to your Elasticsearch instance:

curl -X PUT \
        http://localhost:9200/json \
        -H 'content-type:application/json' \
        -d '{
          "settings": {
             "number_of_shards": 5,
             "number_of_replicas": 1
          },
          "mappings": {
             "dynamic": false,
             "properties": {
                "itemid": { "type": "long" },
                "clock": { "type": "date", "format": "epoch_second" },
                "ns": { "type": "long" },
                "value_parsed": { "type": "flattened" },
                "value_raw": { "type": "keyword", "ignore_above": 1000000 }
             }
          }
       }'

Then, create the ingest pipeline:

curl -X PUT \
        http://localhost:9200/_ingest/pipeline/json \
        -H 'content-type:application/json' \
        -d '{
          "processors": [
             {
                "json": {
                   "field": "value",
                   "target_field": "value_parsed",
                   "ignore_failure": true
                }
             },
             {
                "set": {
                   "if": "ctx.value_parsed == null",
                   "field": "value_raw",
                   "value": "{{{ value }}}"
                }
             }
          ],
          "on_failure": [
             {
                "set": {
                   "field": "value_raw",
                   "value": "{{{ value }}}"
                }
             }
          ]
       }'

Elasticsearch will respond with a confirmation that the ingest pipeline was created:

{"acknowledged": true}
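Before relying on the pipeline, you can run a test document through the Elasticsearch simulate API to confirm that values are parsed as expected. The itemid, clock, and value below are illustrative sample data, not values taken from a real Zabbix installation:

```shell
# Run a sample document through the "json" pipeline without indexing it.
# The response should show the sample value parsed into value_parsed.
curl -X POST \
        http://localhost:9200/_ingest/pipeline/json/_simulate \
        -H 'content-type:application/json' \
        -d '{
          "docs": [
             {
                "_source": {
                   "itemid": 10001,
                   "clock": 1700000000,
                   "value": "{\"status\": \"ok\"}"
                }
             }
          ]
       }'
```

If the sample value is not valid JSON, the response should instead show it stored in value_raw.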

Storing history in date-based indices

Instead of writing all history data to a single index (e.g., uint), Elasticsearch can distribute this data across multiple date-based indices (e.g., uint-2026-01-01, uint-2026-01-02). This makes it easier to manage data volume and retention over time.

To enable this, you need to:

  • Create an index template for each value type you want to store—this tells Elasticsearch what mapping to apply when it automatically creates a new date-based index.
  • Create an ingest pipeline for each value type—it processes each incoming value and routes it to the correct date-based index.
  • Configure the HistoryStorageDateIndex parameter in the Zabbix server configuration file—this enables storing values in multiple date-based indices.

Index templates

To create a template for the text index, send a request with the following details:

  • Use _template/text_template in the URL of your Elasticsearch instance.
  • Use "text*" in the "index_patterns" field to match the index name.
  • Use a mapping for the text value type (see mappings in the Zabbix source repository).
curl -X PUT \
        http://localhost:9200/_template/text_template \
        -H 'content-type:application/json' \
        -d '{
          "index_patterns": [ "text*" ],
          "settings": {
             "index": {
                "number_of_replicas": 1,
                "number_of_shards": 5
             }
          },
          "mappings": {
             "properties": {
                "itemid": { "type": "long" },
                "clock": { "format": "epoch_second", "type": "date" },
                "value": {
                   "fields": {
                      "analyzed": { "index": true, "type": "text", "analyzer": "standard" }
                   },
                   "index": false,
                   "type": "text"
                }
             }
          }
       }'

Template for the json index:

curl -X PUT \
        http://localhost:9200/_template/json_template \
        -H 'content-type:application/json' \
        -d '{
          "index_patterns": [ "json*" ],
          "settings": {
             "number_of_shards": 5,
             "number_of_replicas": 1
          },
          "mappings": {
             "dynamic": false,
             "properties": {
                "itemid": { "type": "long" },
                "clock": { "type": "date", "format": "epoch_second" },
                "ns": { "type": "long" },
                "value_parsed": { "type": "flattened" },
                "value_raw": { "type": "keyword", "ignore_above": 1000000 }
             }
          }
       }'

Ingest pipelines

To create an ingest pipeline for the text index:

  • Use _ingest/pipeline/text-pipeline in the URL of your Elasticsearch instance.
  • Include a date_index_name processor to route each value to the correct date-based index based on its timestamp.
curl -X PUT \
        http://localhost:9200/_ingest/pipeline/text-pipeline \
        -H 'content-type:application/json' \
        -d '{
          "description": "daily text index naming",
          "processors": [
             {
                "date_index_name": {
                   "field": "clock",
                   "date_formats": ["UNIX"],
                   "index_name_prefix": "text-",
                   "date_rounding": "d"
                }
             }
          ]
       }'

For the json index, the pipeline also needs to parse the JSON value before routing it to the correct index:

curl -X PUT \
        http://localhost:9200/_ingest/pipeline/json-pipeline \
        -H 'content-type:application/json' \
        -d '{
          "description": "daily json index naming",
          "processors": [
             {
                "json": {
                   "field": "value",
                   "target_field": "value_parsed",
                   "ignore_failure": true
                }
             },
             {
                "script": {
                   "source": "if (ctx.value_parsed == null || !(ctx.value_parsed instanceof Map)) { ctx.value_raw = ctx.value; ctx.remove(\"value_parsed\"); }"
                }
             },
             {
                "date_index_name": {
                   "field": "clock",
                   "date_formats": [ "UNIX" ],
                   "index_name_prefix": "json-",
                   "date_rounding": "d"
                }
             }
          ]
       }'
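The simulate API can also be used to verify that the date_index_name processor routes values to the expected daily index. The sample document below is illustrative; the _index field in the response contains the date-math expression that determines the target index:

```shell
# Simulate routing a sample text value through the text-pipeline.
# The response's _index field shows where the document would be written.
curl -X POST \
        http://localhost:9200/_ingest/pipeline/text-pipeline/_simulate \
        -H 'content-type:application/json' \
        -d '{
          "docs": [
             {
                "_source": {
                   "itemid": 10001,
                   "clock": 1700000000,
                   "value": "sample text value"
                }
             }
          ]
       }'
```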

Configuring Zabbix server

In your Zabbix server configuration file (zabbix_server.conf), set the HistoryStorageURL and HistoryStorageTypes parameters. For example, to store Character, Log, Text, and JSON type values in Elasticsearch (while keeping Numeric values in a database):

HistoryStorageURL=http://localhost:9200
HistoryStorageTypes=str,log,text,json

If you're using date-based indices for all values stored in Elasticsearch, also set the HistoryStorageDateIndex parameter:

HistoryStorageDateIndex=1

After making changes, restart Zabbix server:

systemctl restart zabbix-server
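Once the server is running, one way to confirm that history values are actually reaching Elasticsearch is to check the indices and their document counts (the index names shown match the example configuration above):

```shell
# List all indices with their document counts; the configured history
# indices (e.g., str, log, text, json) should appear and grow over time.
curl -X GET "http://localhost:9200/_cat/indices?v"

# Or query the document count of a specific index directly:
curl -X GET "http://localhost:9200/text/_count"
```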

Configuring Zabbix frontend

In your Zabbix frontend configuration file (zabbix.conf.php), declare $HISTORY as a global variable and set its url and types values to match the server configuration:

// Zabbix GUI configuration file.
global $DB, $HISTORY;

$HISTORY['url']   = 'http://localhost:9200';
$HISTORY['types'] = ['str', 'log', 'text', 'json'];

Troubleshooting

The following steps may help you troubleshoot problems with your Elasticsearch setup:

  1. Verify that auto_create_index is enabled:
curl -X GET \
        "http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=**.auto_create_index"
       
# {"defaults": {"action": {"auto_create_index": "false"} } }

To enable it, send the following request:

curl -X PUT \
        http://localhost:9200/_cluster/settings \
        -H 'content-type:application/json' \
        -d '{
          "persistent": {
             "action.auto_create_index": "true"
          }
       }'
       
# {"acknowledged": true, "persistent": {"action": {"auto_create_index": "true"} }, "transient": {} }
  2. Verify that mappings, templates, and ingest pipelines are correct by sending GET requests to their respective URLs:
curl -X GET http://localhost:9200/json
curl -X GET http://localhost:9200/_template/json*
curl -X GET http://localhost:9200/_ingest/pipeline/json*

You can compare received responses with expected responses in the Elasticsearch API documentation.

  3. Check if any shards are in a failed state; restarting Elasticsearch may resolve this.

  4. Verify that your Elasticsearch configuration allows access from Zabbix server and Zabbix frontend.

  5. Use the LogSlowQueries Zabbix server configuration parameter to identify slow queries.

  6. Check the Elasticsearch logs for errors.

  7. If you need to reset your Elasticsearch setup and start over, you can delete all indices, templates, and ingest pipelines:

curl -X DELETE "http://localhost:9200/_all"
curl -X DELETE "http://localhost:9200/_template/*"
curl -X DELETE "http://localhost:9200/_ingest/pipeline/*"