
Monitor Elasticsearch on Kubernetes with OpenTelemetry

Monitor your Elasticsearch clusters in Kubernetes by deploying the OpenTelemetry Collector with automatic pod discovery. This integration uses the elasticsearch receiver together with the receiver_creator component to discover and monitor Elasticsearch pods without manual per-pod configuration.

To get started, choose between two collector distributions:

  • NRDOT: New Relic Distribution of OpenTelemetry
  • OTel Collector Contrib: Standard OpenTelemetry Collector with community-contributed components

Installation options

Choose the collector distribution that matches your needs:

Important

NRDOT support for Elasticsearch Kubernetes monitoring is coming soon. Stay tuned for updates.

Before you begin

Before deploying the OTel Collector Contrib on Kubernetes, ensure you have:

Required access privileges:

  • Your New Relic license key
  • kubectl access to your Kubernetes cluster
  • Elasticsearch cluster admin privileges with monitor or manage cluster privilege (see Elasticsearch security privileges documentation for details)
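If you prefer a dedicated monitoring user rather than an admin account, a minimal sketch using Elasticsearch's security API might look like the following. The role name `es-monitor`, user name `monitoring-user`, `localhost:9200` endpoint, and `ELASTIC_PASSWORD` variable are all assumptions; adjust them for your cluster.

```shell
# Create a role with only the "monitor" cluster privilege (names are hypothetical)
curl -u elastic:$ELASTIC_PASSWORD -X POST "https://localhost:9200/_security/role/es-monitor" \
  -H 'Content-Type: application/json' \
  -d '{"cluster": ["monitor"]}'

# Create a user for the collector and assign it that role
curl -u elastic:$ELASTIC_PASSWORD -X POST "https://localhost:9200/_security/user/monitoring-user" \
  -H 'Content-Type: application/json' \
  -d '{"password": "CHANGE_ME", "roles": ["es-monitor"]}'
```

The collector only reads cluster and node stats, so the read-only monitor privilege is sufficient; avoid granting manage unless you need it elsewhere.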

System requirements:

  • Elasticsearch version 7.16 or higher - This integration requires a modern Elasticsearch cluster
  • Kubernetes cluster - A running Kubernetes cluster where Elasticsearch is deployed
  • Helm 3.0 or higher - Helm installed on your system
  • Network connectivity - Outbound HTTPS (port 443) to New Relic's OTLP ingest endpoint

Elasticsearch pod requirements:

  • Pod labels (Required) - Each Elasticsearch pod must have the label app: elasticsearch for automatic discovery to work. Without this label, the collector will not detect or monitor your pods.

Important

How to add labels to Elasticsearch pods:

If you're using a StatefulSet or Deployment for Elasticsearch, add the label in the pod template:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  template:
    metadata:
      labels:
        app: elasticsearch # Required for auto-discovery
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.x.x

For existing pods without labels, update your StatefulSet/Deployment and restart the pods:

$ kubectl label pods -l <your-existing-selector> app=elasticsearch -n <namespace>

You can verify that the labels are set correctly:

$ kubectl get pods -n <namespace> --show-labels

Create Kubernetes secret for credentials

Create a Kubernetes secret to store your New Relic credentials securely:

  1. Create the namespace:

     $ kubectl create namespace newrelic

  2. Create the secret:

     $ kubectl create secret generic newrelic-licenses \
         --from-literal=NEWRELIC_LICENSE_KEY=YOUR_LICENSE_KEY_HERE \
         --from-literal=NEWRELIC_OTLP_ENDPOINT=https://otlp.nr-data.net:4318 \
         --from-literal=NEW_RELIC_MEMORY_LIMIT_MIB=100 \
         -n newrelic

Update the values:

  • Replace YOUR_LICENSE_KEY_HERE with your actual New Relic license key
  • Replace https://otlp.nr-data.net:4318 with your region's endpoint (refer to OTLP endpoint documentation)
  • Replace 100 with your desired memory limit in MiB for the collector (default: 100 MiB). Adjust based on your environment's needs
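Before deploying, you can spot-check that the secret was created with the expected keys and values (the secret and key names follow the command above):

```shell
# List the keys stored in the secret (values are shown base64-encoded)
kubectl get secret newrelic-licenses -n newrelic -o jsonpath='{.data}'

# Decode a single value to confirm it is correct
kubectl get secret newrelic-licenses -n newrelic \
  -o jsonpath='{.data.NEWRELIC_OTLP_ENDPOINT}' | base64 --decode
```

Catching a typo in the license key or endpoint here is much faster than diagnosing export failures from the collector logs later.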

Configure Elasticsearch monitoring

Create a values.yaml file to configure the OpenTelemetry Collector for Elasticsearch monitoring:

Tip

Customize for your environment: Update the following values in the configuration:

Required changes:

  • Pod label rule - The rule labels["app"] == "elasticsearch" must match your pod labels. If your Elasticsearch pods use different labels (e.g., app: es-cluster), update the rule accordingly:
    rule: type == "pod" && labels["app"] == "es-cluster"
  • Cluster name - Replace elasticsearch-cluster with a unique name to identify your cluster in New Relic. This name will be used to create and identify your Elasticsearch entities in the New Relic UI. Choose a name that's unique across your New Relic account (e.g., prod-es-k8s, staging-elasticsearch)

Optional changes:

  • Port - Update 9200 if Elasticsearch runs on a different port
  • Authentication - Add credentials if your Elasticsearch cluster is secured
mode: deployment
image:
  repository: otel/opentelemetry-collector-contrib
  pullPolicy: IfNotPresent
command:
  name: otelcol-contrib
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 256Mi
extraEnvs:
  - name: NEWRELIC_LICENSE_KEY
    valueFrom:
      secretKeyRef:
        name: newrelic-licenses
        key: NEWRELIC_LICENSE_KEY
  - name: NEWRELIC_OTLP_ENDPOINT
    valueFrom:
      secretKeyRef:
        name: newrelic-licenses
        key: NEWRELIC_OTLP_ENDPOINT
  - name: NEW_RELIC_MEMORY_LIMIT_MIB
    valueFrom:
      secretKeyRef:
        name: newrelic-licenses
        key: NEW_RELIC_MEMORY_LIMIT_MIB
  - name: K8S_CLUSTER_NAME
    value: "elasticsearch-cluster"
clusterRole:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods", "nodes", "nodes/stats", "nodes/proxy"]
      verbs: ["get", "list", "watch"]
    - apiGroups: ["apps"]
      resources: ["replicasets"]
      verbs: ["get", "list", "watch"]
config:
  extensions:
    health_check:
      endpoint: 0.0.0.0:13133
    k8s_observer:
      auth_type: serviceAccount
      observe_pods: true
      observe_nodes: true
  receivers:
    receiver_creator/elasticsearch:
      watch_observers: [k8s_observer]
      receivers:
        elasticsearch:
          rule: type == "pod" && labels["app"] == "elasticsearch"
          config:
            endpoint: 'http://`endpoint`:9200'
            collection_interval: 30s
            metrics:
              elasticsearch.os.cpu.usage:
                enabled: true
              elasticsearch.cluster.data_nodes:
                enabled: true
              elasticsearch.cluster.health:
                enabled: true
              elasticsearch.cluster.in_flight_fetch:
                enabled: true
              elasticsearch.cluster.nodes:
                enabled: true
              elasticsearch.cluster.pending_tasks:
                enabled: true
              elasticsearch.cluster.shards:
                enabled: true
              elasticsearch.cluster.state_update.time:
                enabled: true
              elasticsearch.index.documents:
                enabled: true
              elasticsearch.index.operations.merge.current:
                enabled: true
              elasticsearch.index.operations.time:
                enabled: true
              elasticsearch.node.cache.count:
                enabled: true
              elasticsearch.node.cache.evictions:
                enabled: true
              elasticsearch.node.cache.memory.usage:
                enabled: true
              elasticsearch.node.shards.size:
                enabled: true
              elasticsearch.node.cluster.io:
                enabled: true
              elasticsearch.node.documents:
                enabled: true
              elasticsearch.node.disk.io.read:
                enabled: true
              elasticsearch.node.disk.io.write:
                enabled: true
              elasticsearch.node.fs.disk.available:
                enabled: true
              elasticsearch.node.fs.disk.total:
                enabled: true
              elasticsearch.node.http.connections:
                enabled: true
              elasticsearch.node.ingest.documents.current:
                enabled: true
              elasticsearch.node.ingest.operations.failed:
                enabled: true
              elasticsearch.node.open_files:
                enabled: true
              elasticsearch.node.operations.completed:
                enabled: true
              elasticsearch.node.operations.current:
                enabled: true
              elasticsearch.node.operations.get.completed:
                enabled: true
              elasticsearch.node.operations.time:
                enabled: true
              elasticsearch.node.shards.reserved.size:
                enabled: true
              elasticsearch.index.shards.size:
                enabled: true
              elasticsearch.os.cpu.load_avg.1m:
                enabled: true
              elasticsearch.os.cpu.load_avg.5m:
                enabled: true
              elasticsearch.os.cpu.load_avg.15m:
                enabled: true
              elasticsearch.os.memory:
                enabled: true
              jvm.gc.collections.count:
                enabled: true
              jvm.gc.collections.elapsed:
                enabled: true
              jvm.memory.heap.max:
                enabled: true
              jvm.memory.heap.used:
                enabled: true
              jvm.memory.heap.utilization:
                enabled: true
              jvm.threads.count:
                enabled: true
              elasticsearch.index.segments.count:
                enabled: true
              elasticsearch.index.operations.completed:
                enabled: true
              elasticsearch.node.script.cache_evictions:
                enabled: false
              elasticsearch.node.cluster.connections:
                enabled: false
              elasticsearch.node.pipeline.ingest.documents.preprocessed:
                enabled: false
              elasticsearch.node.thread_pool.tasks.queued:
                enabled: false
              elasticsearch.cluster.published_states.full:
                enabled: false
              jvm.memory.pool.max:
                enabled: false
              elasticsearch.node.script.compilation_limit_triggered:
                enabled: false
              elasticsearch.node.shards.data_set.size:
                enabled: false
              elasticsearch.node.pipeline.ingest.documents.current:
                enabled: false
              elasticsearch.cluster.state_update.count:
                enabled: false
              elasticsearch.node.fs.disk.free:
                enabled: false
              jvm.memory.nonheap.used:
                enabled: false
              jvm.memory.pool.used:
                enabled: false
              elasticsearch.node.translog.size:
                enabled: false
              elasticsearch.node.thread_pool.threads:
                enabled: false
              elasticsearch.cluster.state_queue:
                enabled: false
              elasticsearch.node.translog.operations:
                enabled: false
              elasticsearch.memory.indexing_pressure:
                enabled: false
              elasticsearch.node.ingest.documents:
                enabled: false
              jvm.classes.loaded:
                enabled: false
              jvm.memory.heap.committed:
                enabled: false
              elasticsearch.breaker.memory.limit:
                enabled: false
              elasticsearch.indexing_pressure.memory.total.replica_rejections:
                enabled: false
              elasticsearch.breaker.memory.estimated:
                enabled: false
              elasticsearch.cluster.published_states.differences:
                enabled: false
              jvm.memory.nonheap.committed:
                enabled: false
              elasticsearch.node.translog.uncommitted.size:
                enabled: false
              elasticsearch.node.script.compilations:
                enabled: false
              elasticsearch.node.pipeline.ingest.operations.failed:
                enabled: false
              elasticsearch.indexing_pressure.memory.limit:
                enabled: false
              elasticsearch.breaker.tripped:
                enabled: false
              elasticsearch.indexing_pressure.memory.total.primary_rejections:
                enabled: false
              elasticsearch.node.thread_pool.tasks.finished:
                enabled: false
  processors:
    memory_limiter:
      check_interval: 60s
      limit_mib: ${env:NEW_RELIC_MEMORY_LIMIT_MIB}
    cumulativetodelta: {}
    resource/cluster:
      attributes:
        - key: k8s.cluster.name
          value: "${env:K8S_CLUSTER_NAME}"
          action: insert
    resource/cluster_name_override:
      attributes:
        - key: elasticsearch.cluster.name
          value: "${env:K8S_CLUSTER_NAME}"
          action: upsert
    resourcedetection:
      detectors: [env, system]
      system:
        resource_attributes:
          host.name:
            enabled: true
          host.id:
            enabled: true
          os.type:
            enabled: true
    batch:
      timeout: 10s
      send_batch_size: 1024
    attributes/cardinality_reduction:
      actions:
        - key: process.pid
          action: delete
        - key: process.parent_pid
          action: delete
        - key: k8s.pod.uid
          action: delete
    transform/metadata_nullify:
      metric_statements:
        - context: metric
          statements:
            - set(description, "")
            - set(unit, "")
  exporters:
    otlphttp:
      endpoint: "${env:NEWRELIC_OTLP_ENDPOINT}"
      headers:
        api-key: "${env:NEWRELIC_LICENSE_KEY}"
  service:
    extensions: [health_check, k8s_observer]
    pipelines:
      metrics/elasticsearch:
        receivers: [receiver_creator/elasticsearch]
        processors: [memory_limiter, resourcedetection, resource/cluster, resource/cluster_name_override, attributes/cardinality_reduction, cumulativetodelta, transform/metadata_nullify, batch]
        exporters: [otlphttp]

Tip

For secured Elasticsearch clusters: If your Elasticsearch cluster requires authentication, add credentials to the receiver configuration:

receiver_creator/elasticsearch:
  watch_observers: [k8s_observer]
  receivers:
    elasticsearch:
      rule: type == "pod" && labels["app"] == "elasticsearch"
      config:
        endpoint: 'https://`endpoint`:9200'
        username: "your_elasticsearch_username"
        password: "your_elasticsearch_password"
        tls:
          insecure_skip_verify: false

Store credentials securely using Kubernetes secrets rather than hardcoding them in the values file.
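One way to do this is to store the Elasticsearch credentials in their own Kubernetes secret and expose them to the collector as environment variables. The secret name `elasticsearch-credentials` and the variable names `ES_USERNAME`/`ES_PASSWORD` below are assumptions; pick names that fit your conventions.

```shell
# Store Elasticsearch credentials in a secret (hypothetical names)
kubectl create secret generic elasticsearch-credentials \
  --from-literal=ES_USERNAME=your_elasticsearch_username \
  --from-literal=ES_PASSWORD=your_elasticsearch_password \
  -n newrelic
```

Then add the two variables under extraEnvs in values.yaml (with secretKeyRef entries, mirroring the NEWRELIC_LICENSE_KEY example) and reference them in the receiver config as username: "${env:ES_USERNAME}" and password: "${env:ES_PASSWORD}", the same environment-variable substitution syntax the configuration already uses for the OTLP endpoint.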

Install with Helm

Install the OpenTelemetry Collector using Helm with your values.yaml configuration:

$ helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
$ helm repo update
$ helm upgrade --install elasticsearch-otel-collector open-telemetry/opentelemetry-collector \
    --namespace newrelic \
    --create-namespace \
    -f values.yaml
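Once the install completes, you can confirm the release state and review the values the chart was rendered with (the release and namespace names follow the command above):

```shell
# Confirm the Helm release deployed successfully
helm status elasticsearch-otel-collector -n newrelic

# Show the user-supplied values applied to the release
helm get values elasticsearch-otel-collector -n newrelic
```

Because the install uses helm upgrade --install, you can edit values.yaml and re-run the same command to apply configuration changes.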

Verify deployment and data collection

Verify that the OpenTelemetry Collector is running and collecting Elasticsearch data:

  1. Check that the collector pods are running:

    $ kubectl get pods -n newrelic --watch

    You should see pods with names like elasticsearch-otel-collector-<hash> in a Running state.

  2. Check the collector logs for any errors:

    $ kubectl logs -n newrelic -l app.kubernetes.io/name=opentelemetry-collector -f

    Look for successful connections to Elasticsearch pods and New Relic. If you see errors, refer to the troubleshooting guide.

  3. Run an NRQL query in New Relic to confirm data is arriving (replace elasticsearch-cluster with your cluster name):

    FROM Metric
    SELECT *
    WHERE metricName LIKE 'elasticsearch.%'
    AND instrumentation.provider = 'opentelemetry'
    AND k8s.cluster.name = 'elasticsearch-cluster'
    SINCE 10 minutes ago

Tip

Correlate APM with Elasticsearch: To connect your APM application and Elasticsearch cluster, include the resource attribute es.cluster.name="your-cluster-name" in your APM metrics. This enables cross-service visibility and faster troubleshooting within New Relic.
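For OpenTelemetry-instrumented APM services, one way to attach that attribute without code changes is the standard OTEL_RESOURCE_ATTRIBUTES environment variable, which OpenTelemetry SDKs read at startup (the cluster name below is a placeholder):

```shell
# Attach es.cluster.name to everything this APM service exports (placeholder value)
export OTEL_RESOURCE_ATTRIBUTES="es.cluster.name=your-cluster-name"
```

Set the same value you chose for K8S_CLUSTER_NAME so the APM and Elasticsearch telemetry share a matching attribute.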

Troubleshooting

If you encounter issues during installation or don't see data in New Relic, see our comprehensive troubleshooting guide for step-by-step solutions to common problems.

For Kubernetes-specific issues like pod discovery, RBAC permissions, or network connectivity, refer to the Kubernetes troubleshooting section.

Copyright © 2026 New Relic Inc.
