Elasticsearch

Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java.

This template is for Zabbix version: 7.0

Source: https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/templates/app/elasticsearch_http?at=release/7.0

Elasticsearch Cluster by HTTP

Overview

This template monitors Elasticsearch with Zabbix and does not require any external scripts. It works with both standalone and cluster instances. The metrics are collected remotely in one pass by an HTTP agent, using the _cluster/health, _cluster/stats, and _nodes/stats REST API requests.

Requirements

Zabbix version: 7.0 and higher.

Tested versions

This template has been tested on:

  • Elasticsearch 6.5, 7.6

Configuration

Zabbix should be configured according to the instructions in the Templates out of the box section.

Setup

  1. Set the hostname or IP address of the Elasticsearch host in the {$ELASTICSEARCH.HOST} macro.

  2. Set the login and password in the {$ELASTICSEARCH.USERNAME} and {$ELASTICSEARCH.PASSWORD} macros.

  3. If the ES API is exposed at a non-default location, also adjust the {$ELASTICSEARCH.SCHEME} and {$ELASTICSEARCH.PORT} macros.
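
Before linking the template, it can help to verify that the REST endpoints it polls are reachable with the credentials you plan to put into the macros. Below is a minimal connectivity sketch in Python; the scheme, host, port, and credentials are placeholders standing in for your own macro values, not values taken from the template.

```python
# Minimal connectivity check for the endpoints used by the template:
# _cluster/health, _cluster/stats and _nodes/stats.
# The scheme, host, port and credentials are placeholders -- substitute
# the values you intend to put into the corresponding template macros.
import requests

SCHEME = "http"              # {$ELASTICSEARCH.SCHEME}
HOST = "es.example.com"      # {$ELASTICSEARCH.HOST}
PORT = 9200                  # {$ELASTICSEARCH.PORT}
AUTH = ("zabbix", "secret")  # {$ELASTICSEARCH.USERNAME} / {$ELASTICSEARCH.PASSWORD}

for path in ("_cluster/health", "_cluster/stats", "_nodes/stats"):
    url = f"{SCHEME}://{HOST}:{PORT}/{path}"
    resp = requests.get(url, auth=AUTH, timeout=10)
    resp.raise_for_status()
    print(path, "->", resp.status_code)
```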

Macros used

Name | Description | Default
{$ELASTICSEARCH.USERNAME} | The Elasticsearch username. |
{$ELASTICSEARCH.PASSWORD} | The Elasticsearch password. |
{$ELASTICSEARCH.HOST} | The hostname or IP address of the Elasticsearch host. | <SET ELASTICSEARCH HOST>
{$ELASTICSEARCH.PORT} | The port of the Elasticsearch host. | 9200
{$ELASTICSEARCH.SCHEME} | The scheme of the Elasticsearch API (http/https). | http
{$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN} | The maximum ES cluster response time in seconds for the trigger expression. | 10s
{$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN} | The maximum query latency in milliseconds for the trigger expression. | 100
{$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN} | The maximum fetch latency in milliseconds for the trigger expression. | 100
{$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN} | The maximum indexing latency in milliseconds for the trigger expression. | 100
{$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN} | The maximum flush latency in milliseconds for the trigger expression. | 100
{$ELASTICSEARCH.HEAP_USED.MAX.WARN} | The maximum percentage of JVM heap in use for the warning trigger expression. | 85
{$ELASTICSEARCH.HEAP_USED.MAX.CRIT} | The maximum percentage of JVM heap in use for the critical trigger expression. | 95
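
If you manage hosts through the Zabbix JSON-RPC API rather than the frontend, the macros can also be set per host with the usermacro.create method. The sketch below is only an illustration: the API URL, token, and host ID are placeholders, and you should check the API documentation of your Zabbix version for the exact authentication mechanism.

```python
# Hypothetical sketch: setting the template macros on an existing host through
# the Zabbix JSON-RPC API. The URL, API token and host ID are placeholders;
# consult the API documentation of your Zabbix version before relying on this.
import requests

ZABBIX_API = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder
API_TOKEN = "replace-with-an-api-token"                    # placeholder
HOST_ID = "10084"                                          # placeholder

macros = {
    "{$ELASTICSEARCH.HOST}": "es.example.com",
    "{$ELASTICSEARCH.USERNAME}": "zabbix",
    "{$ELASTICSEARCH.PASSWORD}": "secret",
}

for macro, value in macros.items():
    payload = {
        "jsonrpc": "2.0",
        "method": "usermacro.create",
        "params": {"hostid": HOST_ID, "macro": macro, "value": value},
        "id": 1,
    }
    resp = requests.post(
        ZABBIX_API,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(macro, resp.json())
```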

Items

Name Description Type Key and additional info
Service status

Checks if the service is running and accepting TCP connections.

Simple check net.tcp.service["{$ELASTICSEARCH.SCHEME}","{$ELASTICSEARCH.HOST}","{$ELASTICSEARCH.PORT}"]

Preprocessing

  • Discard unchanged with heartbeat: 10m

Service response time

Checks performance of the TCP service.

Simple check net.tcp.service.perf["{$ELASTICSEARCH.SCHEME}","{$ELASTICSEARCH.HOST}","{$ELASTICSEARCH.PORT}"]
Get cluster health

Returns the health status of a cluster.

HTTP agent es.cluster.get_health
Cluster health status

Health status of the cluster, based on the state of its primary and replica shards. Statuses are:

  • green: All shards are assigned.

  • yellow: All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.

  • red: One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.

Dependent item es.cluster.status

Preprocessing

  • JSON Path: $.status

  • JavaScript: The text is too long. Please see the template.

  • Discard unchanged with heartbeat: 1h
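
The JavaScript preprocessing body is not shown here (see the template itself), but judging by the trigger expressions further below, the status string is mapped to a numeric code: yellow to 1, red to 2, and anything unrecognized to 255, with green presumably mapped to 0. A hypothetical Python illustration of that kind of mapping:

```python
# Illustrative only: the actual mapping is done by a JavaScript preprocessing
# step inside the template. The numeric codes are inferred from the trigger
# expressions (1 = YELLOW, 2 = RED, 255 = unknown); GREEN is assumed to be 0.
def health_status_code(status: str) -> int:
    mapping = {"green": 0, "yellow": 1, "red": 2}
    return mapping.get(status.lower(), 255)

assert health_status_code("yellow") == 1
assert health_status_code("something-else") == 255
```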

Number of nodes

The number of nodes within the cluster.

Dependent item es.cluster.number_of_nodes

Preprocessing

  • JSON Path: $.number_of_nodes

  • Discard unchanged with heartbeat: 1h

Number of data nodes

The number of nodes that are dedicated data nodes.

Dependent item es.cluster.number_of_data_nodes

Preprocessing

  • JSON Path: $.number_of_data_nodes

  • Discard unchanged with heartbeat: 1h

Number of relocating shards

The number of shards that are under relocation.

Dependent item es.cluster.relocating_shards

Preprocessing

  • JSON Path: $.relocating_shards

Number of initializing shards

The number of shards that are under initialization.

Dependent item es.cluster.initializing_shards

Preprocessing

  • JSON Path: $.initializing_shards

Number of unassigned shards

The number of shards that are not allocated.

Dependent item es.cluster.unassigned_shards

Preprocessing

  • JSON Path: $.unassigned_shards

Delayed unassigned shards

The number of shards whose allocation has been delayed by the timeout settings.

Dependent item es.cluster.delayed_unassigned_shards

Preprocessing

  • JSON Path: $.delayed_unassigned_shards

Number of pending tasks

The number of cluster-level changes that have not yet been executed.

Dependent item es.cluster.number_of_pending_tasks

Preprocessing

  • JSON Path: $.number_of_pending_tasks

Task max waiting in queue

The time in seconds that the earliest initiated task has been waiting to be performed.

Dependent item es.cluster.task_max_waiting_in_queue

Preprocessing

  • JSON Path: $.task_max_waiting_in_queue_millis

  • Custom multiplier: 0.001

Inactive shards percentage

The ratio of inactive shards in the cluster expressed as a percentage.

Dependent item es.cluster.inactive_shards_percent_as_number

Preprocessing

  • JSON Path: $.active_shards_percent_as_number

  • JavaScript: The text is too long. Please see the template.

Get cluster stats

Returns cluster statistics.

HTTP agent es.cluster.get_stats
Cluster uptime

Uptime duration in seconds since the JVM last started.

Dependent item es.nodes.jvm.max_uptime

Preprocessing

  • JSON Path: $.nodes.jvm.max_uptime_in_millis

  • Custom multiplier: 0.001

Number of non-deleted documents

The total number of non-deleted documents across all primary shards assigned to the selected nodes.

This number is based on the documents in Lucene segments and may include the documents from nested fields.

Dependent item es.indices.docs.count

Preprocessing

  • JSON Path: $.indices.docs.count

  • Discard unchanged with heartbeat: 1h

Indices with shards assigned to nodes

The total number of indices with shards assigned to the selected nodes.

Dependent item es.indices.count

Preprocessing

  • JSON Path: $.indices.count

  • Discard unchanged with heartbeat: 1h

Total size of all file stores

The total size in bytes of all file stores across all selected nodes.

Dependent item es.nodes.fs.total_in_bytes

Preprocessing

  • JSON Path: $.nodes.fs.total_in_bytes

  • Discard unchanged with heartbeat: 1h

Total available size to JVM in all file stores

The total number of bytes available to JVM in the file stores across all selected nodes.

Depending on OS or process-level restrictions, this number may be less than nodes.fs.free_in_bytes.

This is the actual amount of free disk space the selected Elasticsearch nodes can use.

Dependent item es.nodes.fs.available_in_bytes

Preprocessing

  • JSON Path: $.nodes.fs.available_in_bytes

  • Discard unchanged with heartbeat: 1h

Nodes with the data role

The number of selected nodes with the data role.

Dependent item es.nodes.count.data

Preprocessing

  • JSON Path: $.nodes.count.data

  • Discard unchanged with heartbeat: 1h

Nodes with the ingest role

The number of selected nodes with the ingest role.

Dependent item es.nodes.count.ingest

Preprocessing

  • JSON Path: $.nodes.count.ingest

  • Discard unchanged with heartbeat: 1h

Nodes with the master role

The number of selected nodes with the master role.

Dependent item es.nodes.count.master

Preprocessing

  • JSON Path: $.nodes.count.master

  • Discard unchanged with heartbeat: 1h

Get nodes stats

Returns cluster nodes statistics.

HTTP agent es.nodes.get_stats

Triggers

Name Description Expression Severity Dependencies and additional info
Elasticsearch: Service is down

The service is unavailable or does not accept TCP connections.

last(/Elasticsearch Cluster by HTTP/net.tcp.service["{$ELASTICSEARCH.SCHEME}","{$ELASTICSEARCH.HOST}","{$ELASTICSEARCH.PORT}"])=0 Average Manual close: Yes
Elasticsearch: Service response time is too high

The performance of the TCP service is very low.

min(/Elasticsearch Cluster by HTTP/net.tcp.service.perf["{$ELASTICSEARCH.SCHEME}","{$ELASTICSEARCH.HOST}","{$ELASTICSEARCH.PORT}"],5m)>{$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN} Warning Manual close: Yes
Depends on:
  • Elasticsearch: Service is down
Elasticsearch: Health is YELLOW

All primary shards are assigned, but one or more replica shards are unassigned.
If a node in the cluster fails, some data could be unavailable until that node is repaired.

last(/Elasticsearch Cluster by HTTP/es.cluster.status)=1 Average
Elasticsearch: Health is RED

One or more primary shards are unassigned, so some data is unavailable.
This can occur briefly during cluster startup as primary shards are assigned.

last(/Elasticsearch Cluster by HTTP/es.cluster.status)=2 High
Elasticsearch: Health is UNKNOWN

The health status of the cluster is unknown or cannot be obtained.

last(/Elasticsearch Cluster by HTTP/es.cluster.status)=255 High
Elasticsearch: The number of nodes within the cluster has decreased change(/Elasticsearch Cluster by HTTP/es.cluster.number_of_nodes)<0 Info Manual close: Yes
Elasticsearch: The number of nodes within the cluster has increased change(/Elasticsearch Cluster by HTTP/es.cluster.number_of_nodes)>0 Info Manual close: Yes
Elasticsearch: Cluster has the initializing shards

The cluster has had initializing shards for longer than 10 minutes.

min(/Elasticsearch Cluster by HTTP/es.cluster.initializing_shards,10m)>0 Average
Elasticsearch: Cluster has the unassigned shards

The cluster has had unassigned shards for longer than 10 minutes.

min(/Elasticsearch Cluster by HTTP/es.cluster.unassigned_shards,10m)>0 Average
Elasticsearch: Cluster has been restarted

Uptime is less than 10 minutes.

last(/Elasticsearch Cluster by HTTP/es.nodes.jvm.max_uptime)<10m Info Manual close: Yes
Elasticsearch: Cluster does not have enough space for resharding

There is not enough disk space for index resharding.

(last(/Elasticsearch Cluster by HTTP/es.nodes.fs.total_in_bytes)-last(/Elasticsearch Cluster by HTTP/es.nodes.fs.available_in_bytes))/(last(/Elasticsearch Cluster by HTTP/es.cluster.number_of_data_nodes)-1)>last(/Elasticsearch Cluster by HTTP/es.nodes.fs.available_in_bytes) High
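
The expression above can be read as: the space currently used across the cluster, spread over one fewer data node, must still fit into the space currently available; otherwise the shards of a failed data node cannot be redistributed. A small Python sketch of the same arithmetic, with made-up numbers:

```python
# Sketch of the trigger's condition: the cluster's used space, spread over
# one fewer data node, must still fit into the currently available space;
# otherwise there is no headroom to re-shard after losing a node.
def not_enough_space_for_resharding(total_bytes: int,
                                    available_bytes: int,
                                    data_nodes: int) -> bool:
    used_bytes = total_bytes - available_bytes
    return used_bytes / (data_nodes - 1) > available_bytes

# 3 data nodes, 3 TB total, 0.5 TB free: losing a node cannot be absorbed
print(not_enough_space_for_resharding(3 * 10**12, 5 * 10**11, 3))  # True
```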
Elasticsearch: Cluster has only two master nodes

The cluster has only two nodes with the master role and will become unavailable if one of them fails.

last(/Elasticsearch Cluster by HTTP/es.nodes.count.master)=2 Disaster

LLD rule Cluster nodes discovery

Name Description Type Key and additional info
Cluster nodes discovery

Discovery of ES cluster nodes.

HTTP agent es.nodes.discovery

Preprocessing

  • JSON Path: $.nodes.[*]

  • Discard unchanged with heartbeat: 1d

Item prototypes for Cluster nodes discovery

Name Description Type Key and additional info
ES {#ES.NODE}: Get data

Returns cluster nodes statistics.

Dependent item es.node.get.data[{#ES.NODE}]

Preprocessing

  • JSON Path: $..[?(@.name=='{#ES.NODE}')].first()

ES {#ES.NODE}: Total size

Total size (in bytes) of all file stores.

Dependent item es.node.fs.total.total_in_bytes[{#ES.NODE}]

Preprocessing

  • JSON Path: $..fs.total.total_in_bytes.first()

  • Discard unchanged with heartbeat: 1d

ES {#ES.NODE}: Total available size

The total number of bytes available to this Java virtual machine on all file stores.

Depending on OS or process level restrictions, this might appear less than fs.total.free_in_bytes.

This is the actual amount of free disk space the Elasticsearch node can utilize.

Dependent item es.node.fs.total.available_in_bytes[{#ES.NODE}]

Preprocessing

  • JSON Path: $..fs.total.available_in_bytes.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Node uptime

JVM uptime in seconds.

Dependent item es.node.jvm.uptime[{#ES.NODE}]

Preprocessing

  • JSON Path: $..jvm.uptime_in_millis.first()

  • Custom multiplier: 0.001

ES {#ES.NODE}: Maximum JVM memory available for use

The maximum amount of memory, in bytes, available for use by the heap.

Dependent item es.node.jvm.mem.heap_max_in_bytes[{#ES.NODE}]

Preprocessing

  • JSON Path: $..jvm.mem.heap_max_in_bytes.first()

  • Discard unchanged with heartbeat: 1d

ES {#ES.NODE}: Amount of JVM heap currently in use

The memory, in bytes, currently in use by the heap.

Dependent item es.node.jvm.mem.heap_used_in_bytes[{#ES.NODE}]

Preprocessing

  • JSON Path: $..jvm.mem.heap_used_in_bytes.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Percent of JVM heap currently in use

The percentage of memory currently in use by the heap.

Dependent item es.node.jvm.mem.heap_used_percent[{#ES.NODE}]

Preprocessing

  • JSON Path: $..jvm.mem.heap_used_percent.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Amount of JVM heap committed

The amount of memory, in bytes, available for use by the heap.

Dependent item es.node.jvm.mem.heap_committed_in_bytes[{#ES.NODE}]

Preprocessing

  • JSON Path: $..jvm.mem.heap_committed_in_bytes.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Number of open HTTP connections

The number of currently open HTTP connections for the node.

Dependent item es.node.http.current_open[{#ES.NODE}]

Preprocessing

  • JSON Path: $..http.current_open.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Rate of HTTP connections opened

The number of HTTP connections opened for the node per second.

Dependent item es.node.http.opened.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..http.total_opened.first()

  • Change per second
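
Several items here rely on Zabbix's Change per second preprocessing step, which turns a monotonically growing counter (in this case http.total_opened) into a rate. A short Python sketch of the arithmetic, assuming two successive samples; the helper name is just for illustration:

```python
# Sketch of the arithmetic behind the "Change per second" preprocessing:
# Zabbix divides the delta of a monotonically growing counter by the
# time elapsed between two successive collections.
def change_per_second(prev_value: float, prev_ts: float,
                      curr_value: float, curr_ts: float) -> float:
    return (curr_value - prev_value) / (curr_ts - prev_ts)

# e.g. total_opened grew from 1200 to 1320 over a 60 s polling interval
print(change_per_second(1200, 0, 1320, 60))  # -> 2.0 connections/s
```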
ES {#ES.NODE}: Time spent throttling operations

Time in seconds spent throttling operations for the last measuring span.

Dependent item es.node.indices.indexing.throttle_time[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.indexing.throttle_time_in_millis.first()

  • Custom multiplier: 0.001

  • Simple change
ES {#ES.NODE}: Time spent throttling recovery operations

Time in seconds spent throttling recovery operations for the last measuring span.

Dependent item es.node.indices.recovery.throttle_time[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.recovery.throttle_time_in_millis.first()

  • Custom multiplier: 0.001

  • Simple change
ES {#ES.NODE}: Time spent throttling merge operations

Time in seconds spent throttling merge operations for the last measuring span.

Dependent item es.node.indices.merges.total_throttled_time[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.merges.total_throttled_time_in_millis.first()

  • Custom multiplier: 0.001

  • Simple change
ES {#ES.NODE}: Rate of queries

The number of query operations per second.

Dependent item es.node.indices.search.query.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.query_total.first()

  • Change per second
ES {#ES.NODE}: Total number of query

The total number of query operations.

Dependent item es.node.indices.search.query_total[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.query_total.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Time spent performing query

Time in seconds spent performing query operations for the last measuring span.

Dependent item es.node.indices.search.query_time[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.query_time_in_millis.first()

  • Custom multiplier: 0.001

  • Simple change
ES {#ES.NODE}: Total time spent performing query

Time in milliseconds spent performing query operations.

Dependent item es.node.indices.search.query_time_in_millis[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.query_time_in_millis.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Query latency

The average query latency calculated by sampling the total number of queries and the total elapsed time at regular intervals.

Calculated es.node.indices.search.query_latency[{#ES.NODE}]
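
The exact formula of this calculated item lives in the template, but the description above amounts to dividing the growth of the total query time by the growth of the query count over the same sampling interval. A hypothetical Python sketch of that calculation:

```python
# Sketch of how the calculated "Query latency" item can be derived from the
# two dependent counters above: the increase in total query time divided by
# the increase in the query count over the same sampling interval.
def average_latency_ms(prev_total: int, prev_time_ms: int,
                       curr_total: int, curr_time_ms: int) -> float:
    ops = curr_total - prev_total
    if ops == 0:
        return 0.0  # no queries completed in this interval
    return (curr_time_ms - prev_time_ms) / ops

# 500 queries took 12,000 ms in total during the interval -> 24 ms average
print(average_latency_ms(10_000, 300_000, 10_500, 312_000))
```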
ES {#ES.NODE}: Current query operations

The number of query operations currently running.

Dependent item es.node.indices.search.query_current[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.query_current.first()

ES {#ES.NODE}: Rate of fetch

The number of fetch operations per second.

Dependent item es.node.indices.search.fetch.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.fetch_total.first()

  • Change per second
ES {#ES.NODE}: Total number of fetch

The total number of fetch operations.

Dependent item es.node.indices.search.fetch_total[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.fetch_total.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Time spent performing fetch

Time in seconds spent performing fetch operations for the last measuring span.

Dependent item es.node.indices.search.fetch_time[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.fetch_time_in_millis.first()

  • Custom multiplier: 0.001

  • Simple change
ES {#ES.NODE}: Total time spent performing fetch

Time in milliseconds spent performing fetch operations.

Dependent item es.node.indices.search.fetch_time_in_millis[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.fetch_time_in_millis.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Fetch latency

The average fetch latency calculated by sampling the total number of fetches and the total elapsed time at regular intervals.

Calculated es.node.indices.search.fetch_latency[{#ES.NODE}]
ES {#ES.NODE}: Current fetch operations

The number of fetch operations currently running.

Dependent item es.node.indices.search.fetch_current[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.search.fetch_current.first()

ES {#ES.NODE}: Write thread pool executor tasks completed

The number of tasks completed by the write thread pool executor.

Dependent item es.node.thread_pool.write.completed.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.write.completed.first()

  • Change per second
ES {#ES.NODE}: Write thread pool active threads

The number of active threads in the write thread pool.

Dependent item es.node.thread_pool.write.active[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.write.active.first()

ES {#ES.NODE}: Write thread pool tasks in queue

The number of tasks in queue for the write thread pool.

Dependent item es.node.thread_pool.write.queue[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.write.queue.first()

ES {#ES.NODE}: Write thread pool executor tasks rejected

The number of tasks rejected by the write thread pool executor.

Dependent item es.node.thread_pool.write.rejected.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.write.rejected.first()

  • Change per second
ES {#ES.NODE}: Search thread pool executor tasks completed

The number of tasks completed by the search thread pool executor.

Dependent item es.node.thread_pool.search.completed.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.search.completed.first()

  • Change per second
ES {#ES.NODE}: Search thread pool active threads

The number of active threads in the search thread pool.

Dependent item es.node.thread_pool.search.active[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.search.active.first()

ES {#ES.NODE}: Search thread pool tasks in queue

The number of tasks in queue for the search thread pool.

Dependent item es.node.thread_pool.search.queue[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.search.queue.first()

ES {#ES.NODE}: Search thread pool executor tasks rejected

The number of tasks rejected by the search thread pool executor.

Dependent item es.node.thread_pool.search.rejected.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.search.rejected.first()

  • Change per second
ES {#ES.NODE}: Refresh thread pool executor tasks completed

The number of tasks completed by the refresh thread pool executor.

Dependent item es.node.thread_pool.refresh.completed.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.refresh.completed.first()

  • Change per second
ES {#ES.NODE}: Refresh thread pool active threads

The number of active threads in the refresh thread pool.

Dependent item es.node.thread_pool.refresh.active[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.refresh.active.first()

ES {#ES.NODE}: Refresh thread pool tasks in queue

The number of tasks in queue for the refresh thread pool.

Dependent item es.node.thread_pool.refresh.queue[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.refresh.queue.first()

ES {#ES.NODE}: Refresh thread pool executor tasks rejected

The number of tasks rejected by the refresh thread pool executor.

Dependent item es.node.thread_pool.refresh.rejected.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..thread_pool.refresh.rejected.first()

  • Change per second
ES {#ES.NODE}: Total number of indexing

The total number of indexing operations.

Dependent item es.node.indices.indexing.index_total[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.indexing.index_total.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Total time spent performing indexing

Total time in milliseconds spent performing indexing operations.

Dependent item es.node.indices.indexing.index_time_in_millis[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.indexing.index_time_in_millis.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Indexing latency

The average indexing latency calculated from the available index_total and index_time_in_millis metrics.

Calculated es.node.indices.indexing.index_latency[{#ES.NODE}]
ES {#ES.NODE}: Current indexing operations

The number of indexing operations currently running.

Dependent item es.node.indices.indexing.index_current[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.indexing.index_current.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Total number of index flushes to disk

The total number of flush operations.

Dependent item es.node.indices.flush.total[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.flush.total.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Total time spent on flushing indices to disk

Total time in milliseconds spent performing flush operations.

Dependent item es.node.indices.flush.total_time_in_millis[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.flush.total_time_in_millis.first()

  • Discard unchanged with heartbeat: 1h

ES {#ES.NODE}: Flush latency

The average flush latency calculated from the available flush.total and flush.total_time_in_millis metrics.

Calculated es.node.indices.flush.latency[{#ES.NODE}]
ES {#ES.NODE}: Rate of index refreshes

The number of refresh operations per second.

Dependent item es.node.indices.refresh.rate[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.refresh.total.first()

  • Change per second
ES {#ES.NODE}: Time spent performing refresh

Time in seconds spent performing refresh operations for the last measuring span.

Dependent item es.node.indices.refresh.time[{#ES.NODE}]

Preprocessing

  • JSON Path: $..indices.refresh.total_time_in_millis.first()

  • Custom multiplier: 0.001

  • Simple change

Trigger prototypes for Cluster nodes discovery

Name Description Expression Severity Dependencies and additional info
Elasticsearch: ES {#ES.NODE}: has been restarted

Uptime is less than 10 minutes.

last(/Elasticsearch Cluster by HTTP/es.node.jvm.uptime[{#ES.NODE}])<10m Info Manual close: Yes
Elasticsearch: ES {#ES.NODE}: Percent of JVM heap in use is high

This indicates that the rate of garbage collection isn't keeping up with the rate of garbage creation.
To address this problem, you can either increase your heap size (as long as it remains below the recommended
guidelines stated above), or scale out the cluster by adding more nodes.

min(/Elasticsearch Cluster by HTTP/es.node.jvm.mem.heap_used_percent[{#ES.NODE}],1h)>{$ELASTICSEARCH.HEAP_USED.MAX.WARN} Warning Depends on:
  • Elasticsearch: ES {#ES.NODE}: Percent of JVM heap in use is critical
Elasticsearch: ES {#ES.NODE}: Percent of JVM heap in use is critical

This indicates that the rate of garbage collection isn't keeping up with the rate of garbage creation.
To address this problem, you can either increase your heap size (as long as it remains below the recommended
guidelines stated above), or scale out the cluster by adding more nodes.

min(/Elasticsearch Cluster by HTTP/es.node.jvm.mem.heap_used_percent[{#ES.NODE}],1h)>{$ELASTICSEARCH.HEAP_USED.MAX.CRIT} High
Elasticsearch: ES {#ES.NODE}: Query latency is too high

If latency exceeds a threshold, look for potential resource bottlenecks, or investigate whether you need to optimize your queries.

min(/Elasticsearch Cluster by HTTP/es.node.indices.search.query_latency[{#ES.NODE}],5m)>{$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN} Warning
Elasticsearch: ES {#ES.NODE}: Fetch latency is too high

The fetch phase should typically take much less time than the query phase. If you notice this metric consistently increasing,
this could indicate a problem with slow disks, enriching of documents (highlighting the relevant text in search results, etc.),
or requesting too many results.

min(/Elasticsearch Cluster by HTTP/es.node.indices.search.fetch_latency[{#ES.NODE}],5m)>{$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN} Warning
Elasticsearch: ES {#ES.NODE}: Write thread pool executor has the rejected tasks

The number of tasks rejected by the write thread pool executor is over 0 for 5m.

min(/Elasticsearch Cluster by HTTP/es.node.thread_pool.write.rejected.rate[{#ES.NODE}],5m)>0 Warning
Elasticsearch: ES {#ES.NODE}: Search thread pool executor has the rejected tasks

The number of tasks rejected by the search thread pool executor is over 0 for 5m.

min(/Elasticsearch Cluster by HTTP/es.node.thread_pool.search.rejected.rate[{#ES.NODE}],5m)>0 Warning
Elasticsearch: ES {#ES.NODE}: Refresh thread pool executor has the rejected tasks

The number of tasks rejected by the refresh thread pool executor is over 0 for 5m.

min(/Elasticsearch Cluster by HTTP/es.node.thread_pool.refresh.rejected.rate[{#ES.NODE}],5m)>0 Warning
Elasticsearch: ES {#ES.NODE}: Indexing latency is too high

If the latency is increasing, it may indicate that you are indexing too many documents at the same time (Elasticsearch's documentation
recommends starting with a bulk indexing size of 5 to 15 megabytes and increasing slowly from there).

min(/Elasticsearch Cluster by HTTP/es.node.indices.indexing.index_latency[{#ES.NODE}],5m)>{$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN} Warning
Elasticsearch: ES {#ES.NODE}: Flush latency is too high

If you see this metric increasing steadily, it may indicate a problem with slow disks; this problem may escalate
and eventually prevent you from being able to add new information to your index.

min(/Elasticsearch Cluster by HTTP/es.node.indices.flush.latency[{#ES.NODE}],5m)>{$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN} Warning

Feedback

Please report any issues with the template at https://support.zabbix.com

You can also provide feedback, discuss the template, or ask for help at ZABBIX forums
