Zabbix can store history data in Elasticsearch as an alternative to a relational database.
Elasticsearch support is currently experimental.
This guide covers the setup for Elasticsearch 7.X. If you're using a different version, some functionality may not work as intended.
The setup involves creating an Elasticsearch storage location for each value type, setting up preprocessing (if needed), and connecting Zabbix to your Elasticsearch instance.
Elasticsearch can store the following value types:
| Item value type | Database table | Elasticsearch type |
|---|---|---|
| Numeric (unsigned) | history_uint | uint |
| Numeric (float) | history | dbl |
| Character | history_str | str |
| Log | history_log | log |
| Text | history_text | text |
| Binary | history_bin | not supported by Zabbix |
| JSON | history_json | json |
If Elasticsearch isn't installed yet, refer to the official installation guide before proceeding.
To store history data in Elasticsearch, you need to:

- create an Elasticsearch index (or date-based index template) for each value type you want to store;
- configure Zabbix server to write history data to Elasticsearch;
- configure Zabbix frontend to read history data from Elasticsearch.
Elasticsearch can store data in a single index per value type, or across multiple date-based indices. Both approaches are described below.
In this approach, all history data for a given value type is written to a single index (e.g., uint or text).
To create an index for the Numeric (unsigned) value type, send the following request (with /uint in the URL) to your Elasticsearch instance:
```
curl -X PUT \
 http://localhost:9200/uint \
 -H 'content-type:application/json' \
 -d '{
   "settings": {
      "index": {
         "number_of_replicas": 1,
         "number_of_shards": 5
      }
   },
   "mappings": {
      "properties": {
         "itemid": { "type": "long" },
         "clock": { "format": "epoch_second", "type": "date" },
         "value": { "type": "long" }
      }
   }
}'
```

Elasticsearch will respond with a confirmation that the index was created:
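A successful request is typically acknowledged like this (the exact response may vary between Elasticsearch versions):

```
{
   "acknowledged": true,
   "shards_acknowledged": true,
   "index": "uint"
}
```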
Similar requests must be sent for each additional value type you want to store in Elasticsearch.
Mappings for all value types are available in the Zabbix source repository.
For example, to create an index for the Text value type:
```
curl -X PUT \
 http://localhost:9200/text \
 -H 'content-type:application/json' \
 -d '{
   "settings": {
      "index": {
         "number_of_replicas": 1,
         "number_of_shards": 5
      }
   },
   "mappings": {
      "properties": {
         "itemid": { "type": "long" },
         "clock": { "format": "epoch_second", "type": "date" },
         "value": {
            "fields": {
               "analyzed": { "index": true, "type": "text", "analyzer": "standard" }
            },
            "index": false,
            "type": "text"
         }
      }
   }
}'
```

Unlike other value types, JSON values require additional processing before storage.
The index below uses separate fields for parsed and raw values, so an ingest pipeline is needed to parse each value as JSON and store it in the correct field.
To create an index for the JSON value type, send the following request (with /json in the URL) to your Elasticsearch instance.
```
curl -X PUT \
 http://localhost:9200/json \
 -H 'content-type:application/json' \
 -d '{
   "settings": {
      "number_of_shards": 5,
      "number_of_replicas": 1
   },
   "mappings": {
      "dynamic": false,
      "properties": {
         "itemid": { "type": "long" },
         "clock": { "type": "date", "format": "epoch_second" },
         "ns": { "type": "long" },
         "value_parsed": { "type": "flattened" },
         "value_raw": { "type": "keyword", "ignore_above": 1000000 }
      }
   }
}'
```

Then, create the ingest pipeline:
```
curl -X PUT \
 http://localhost:9200/_ingest/pipeline/json \
 -H 'content-type:application/json' \
 -d '{
   "processors": [
      {
         "json": {
            "field": "value",
            "target_field": "value_parsed",
            "ignore_failure": true
         }
      },
      {
         "set": {
            "if": "ctx.value_parsed == null",
            "field": "value_raw",
            "value": "{{{ value }}}"
         }
      }
   ],
   "on_failure": [
      {
         "set": {
            "field": "value_raw",
            "value": "{{{ value }}}"
         }
      }
   ]
}'
```

Elasticsearch will respond with a confirmation that the ingest pipeline was created:
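A successful request is typically acknowledged like this (the exact response may vary between Elasticsearch versions):

```
{
   "acknowledged": true
}
```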
Instead of writing all history data to a single index (e.g., uint), Elasticsearch can distribute this data across multiple date-based indices (e.g., uint-2026-01-01, uint-2026-01-02). This makes it easier to manage data volume and retention over time.
To enable this, you need to:

- Create an index template for each value type, so that date-based indices are created automatically with the correct mappings.
- Create an ingest pipeline for each value type, to route each value to the correct date-based index.
- Set the HistoryStorageDateIndex parameter in the Zabbix server configuration file; this enables storing values in multiple date-based indices.

To create a template for the text index, send a request with the following details:

- _template/text_template in the URL of your Elasticsearch instance.
- "text*" in the "index_patterns" field to match the index name.
- mappings for the text value type (see mappings in the Zabbix source repository).

```
curl -X PUT \
 http://localhost:9200/_template/text_template \
 -H 'content-type:application/json' \
 -d '{
   "index_patterns": [ "text*" ],
   "settings": {
      "index": {
         "number_of_replicas": 1,
         "number_of_shards": 5
      }
   },
   "mappings": {
      "properties": {
         "itemid": { "type": "long" },
         "clock": { "format": "epoch_second", "type": "date" },
         "value": {
            "fields": {
               "analyzed": { "index": true, "type": "text", "analyzer": "standard" }
            },
            "index": false,
            "type": "text"
         }
      }
   }
}'
```

Template for the json index:
```
curl -X PUT \
 http://localhost:9200/_template/json_template \
 -H 'content-type:application/json' \
 -d '{
   "index_patterns": [ "json*" ],
   "settings": {
      "number_of_shards": 5,
      "number_of_replicas": 1
   },
   "mappings": {
      "dynamic": false,
      "properties": {
         "itemid": { "type": "long" },
         "clock": { "type": "date", "format": "epoch_second" },
         "ns": { "type": "long" },
         "value_parsed": { "type": "flattened" },
         "value_raw": { "type": "keyword", "ignore_above": 1000000 }
      }
   }
}'
```

To create an ingest pipeline for the text index, send a request with the following details:
- _ingest/pipeline/text-pipeline in the URL of your Elasticsearch instance.
- a date_index_name processor to route each value to the correct date-based index based on its timestamp.

```
curl -X PUT \
 http://localhost:9200/_ingest/pipeline/text-pipeline \
 -H 'content-type:application/json' \
 -d '{
   "description": "daily text index naming",
   "processors": [
      {
         "date_index_name": {
            "field": "clock",
            "date_formats": ["UNIX"],
            "index_name_prefix": "text-",
            "date_rounding": "d"
         }
      }
   ]
}'
```

For the json index, the pipeline also needs to parse the JSON value before routing it to the correct index:
```
curl -X PUT \
 http://localhost:9200/_ingest/pipeline/json-pipeline \
 -H 'content-type:application/json' \
 -d '{
   "description": "daily json index naming",
   "processors": [
      {
         "json": {
            "field": "value",
            "target_field": "value_parsed",
            "ignore_failure": true
         }
      },
      {
         "script": {
            "source": "if (ctx.value_parsed == null || !(ctx.value_parsed instanceof Map)) { ctx.value_raw = ctx.value; ctx.remove(\"value_parsed\"); }"
         }
      },
      {
         "date_index_name": {
            "field": "clock",
            "date_formats": [ "UNIX" ],
            "index_name_prefix": "json-",
            "date_rounding": "d"
         }
      }
   ]
}'
```

In your Zabbix server configuration file (zabbix_server.conf), set the following parameters:
- HistoryStorageURL - the URL of your Elasticsearch instance.
- HistoryStorageTypes - comma-separated list of value types to store in Elasticsearch.

For example, to store Character, Log, Text, and JSON type values in Elasticsearch (while keeping Numeric values in a database):
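The resulting zabbix_server.conf entries might look as follows (assuming Elasticsearch is reachable at http://localhost:9200):

```
HistoryStorageURL=http://localhost:9200
HistoryStorageTypes=str,log,text,json
```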
If you're using date-based indices for all values stored in Elasticsearch, also set the HistoryStorageDateIndex parameter:
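For example (assuming the parameter takes 1 to enable and 0 to disable):

```
HistoryStorageDateIndex=1
```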
After making changes, restart Zabbix server:
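For example, on a systemd-based installation (the service name may differ depending on your distribution and packaging):

```
systemctl restart zabbix-server
```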
In your Zabbix frontend configuration file (zabbix.conf.php), declare $HISTORY as a global variable and set its url and types values to match the server configuration:
```
// Zabbix GUI configuration file.
global $DB, $HISTORY;

$HISTORY['url'] = 'http://localhost:9200';
$HISTORY['types'] = ['str', 'log', 'text', 'json'];
```

The following steps may help you troubleshoot problems with your Elasticsearch setup:
Check whether automatic index creation (auto_create_index) is enabled:

```
curl -X GET \
 "http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=**.auto_create_index"
# {"defaults": {"action": {"auto_create_index": "false"} } }
```

To enable it, send the following request:
```
curl -X PUT \
 http://localhost:9200/_cluster/settings \
 -H 'content-type:application/json' \
 -d '{
   "persistent": {
      "action.auto_create_index": "true"
   }
}'
# {"acknowledged": true, "persistent": {"action": {"auto_create_index": "true"} }, "transient": {} }
```

Verify that the indices, templates, and ingest pipelines exist by sending GET requests to their respective URLs:

```
curl -X GET http://localhost:9200/json
curl -X GET http://localhost:9200/_template/json*
curl -X GET http://localhost:9200/_ingest/pipeline/json*
```

You can compare received responses with expected responses in the Elasticsearch API documentation.
Check if any shards are in a failed state; restarting Elasticsearch may resolve this.
Verify that your Elasticsearch configuration allows access from Zabbix server and Zabbix frontend.
Use the LogSlowQueries Zabbix server configuration parameter to identify slow queries.
Check the Elasticsearch logs for errors.
If you need to reset your Elasticsearch setup and start over, you can delete all indices, templates, and ingest pipelines:
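For example, the following requests would remove everything created in this guide for the text value type (a sketch assuming the names used above; repeat with the appropriate names for each other value type):

```
curl -X DELETE 'http://localhost:9200/text*'
curl -X DELETE http://localhost:9200/_template/text_template
curl -X DELETE http://localhost:9200/_ingest/pipeline/text-pipeline
```

Note that the text* pattern also matches any date-based indices (text-2026-01-01, and so on).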