Monitor your NGINX servers running in Kubernetes clusters using the NRDOT collector (recommended) or the OpenTelemetry Collector Contrib distribution to send metrics and telemetry data to New Relic.
This Kubernetes-specific integration automatically discovers NGINX pods in your cluster and collects metrics without manual configuration for each instance. It leverages the OpenTelemetry nginxreceiver and receivercreator to dynamically monitor NGINX performance metrics, connection statistics, and server health across your containerized environment.
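Conceptually, the dynamic discovery described above reduces to a `receiver_creator` entry that watches a `k8s_observer`. The sketch below illustrates the idea only; the label values, port `8080`, and path `basic_status` are placeholders, and the complete working configurations appear in the setup steps:

```yaml
receivers:
  receiver_creator/nginx:
    watch_observers: [k8s_observer]   # the k8s_observer extension watches the Kubernetes API for pods
    receivers:
      nginx:
        # An nginx receiver instance is created for each pod matching this rule
        rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy"
        config:
          # `endpoint` is substituted by the observer with the discovered pod address
          endpoint: 'http://`endpoint`:8080/basic_status'
```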
Set up NGINX monitoring
Choose your preferred collector and follow the steps:
Before you begin
Ensure you have:
- Valid New Relic license key
- HTTP stub status module enabled on each NGINX pod that needs to be monitored
- Labels `app` and `role` added to each NGINX pod that needs to be monitored
- For manifest install: the base Kubernetes OpenTelemetry manifest installation completed
- For Helm install: the base Kubernetes OpenTelemetry Helm installation completed
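The stub status and labeling prerequisites can be sketched in a single manifest. This is a minimal, hypothetical example (the names, port `8080`, and path `/basic_status` are placeholders you should adapt to your deployment):

```yaml
# Hypothetical ConfigMap exposing the stub status endpoint on port 8080
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-status-conf
data:
  status.conf: |
    server {
      listen 8080;
      location /basic_status {
        stub_status;          # requires the http_stub_status_module
      }
    }
---
# Pod template labels that the receiver_creator discovery rule matches on
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx            # must match the labels in your discovery rule
        role: reverse-proxy
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: status-conf
              mountPath: /etc/nginx/conf.d/status.conf
              subPath: status.conf
      volumes:
        - name: status-conf
          configMap:
            name: nginx-status-conf
```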
Configure the NRDOT collector
Install the NRDOT collector using Kubernetes manifests or Helm.
After completing the base Kubernetes OpenTelemetry manifest installation, configure NGINX monitoring by following these steps:
Update the collector image to use the NRDOT collector.

In both `deployment.yaml` and `daemonset.yaml` files in your local `rendered` directory, update the image to:

```yaml
image: newrelic/nrdot-collector:latest
```

Update the `deployment-configmap.yaml` for NGINX monitoring. Choose one of the following configuration options based on your monitoring requirements:
Important
This option monitors NGINX only and removes other Kubernetes metrics collection. You'll delete additional collectors later to prevent unwanted metric ingestion.
Replace the content under `deployment-config.yaml: |` with the following NGINX-specific configuration:

```yaml
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  k8s_observer:
    auth_type: serviceAccount
    observe_pods: true
    observe_nodes: true

receivers:
  receiver_creator/nginx:
    watch_observers: [k8s_observer]
    receivers:
      nginx:
        rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy" # Update with your labels
        config:
          endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as needed
          metrics:
            nginx.requests:
              enabled: true
            nginx.connections_accepted:
              enabled: true
            nginx.connections_handled:
              enabled: true
            nginx.connections_current:
              enabled: true
          collection_interval: 30s
        resource_attributes:
          nginx.server.endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as needed
          nginx.port: '<YOUR_STUB_STATUS_PORT>' # Update to match your configuration

processors:
  batch:
    send_batch_max_size: 1000
    timeout: 30s
    send_batch_size: 800
  memory_limiter:
    check_interval: 1s
    limit_percentage: 80
    spike_limit_percentage: 25
  resource/cluster:
    attributes:
      - key: k8s.cluster.name
        value: "<CLUSTER_NAME>" # Replace with your cluster name
        action: insert
  transform/nginx:
    metric_statements:
      - context: resource
        statements:
          - set(attributes["nginx.display.name"], Concat(["server", "k8s", attributes["k8s.cluster.name"], attributes["k8s.namespace.name"], "pod", attributes["k8s.pod.name"], "nginx", attributes["nginx.port"]], ":"))
          - set(attributes["nginx.deployment.name"], attributes["k8s.pod.name"])
  transform/metadata_nullify:
    metric_statements:
      - context: metric
        statements:
          - set(description, "")
          - set(unit, "")

exporters:
  otlphttp/newrelic:
    endpoint: "<YOUR_NEWRELIC_OTLP_ENDPOINT>"
    headers:
      api-key: ${env:NR_LICENSE_KEY}

service:
  extensions: [health_check, k8s_observer]
  pipelines:
    metrics/nginx:
      receivers: [receiver_creator/nginx]
      processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify, memory_limiter]
      exporters: [otlphttp/newrelic]
```

Configuration parameters
The following table describes the key configuration parameters:
| Parameter | Description |
|---|---|
| `<YOUR_STUB_STATUS_PORT>` | Replace with your NGINX stub status port (for example, 80 or 8080) |
| `<YOUR_STUB_STATUS_PATH>` | Replace with your NGINX stub status path (for example, basic_status) |
| `<CLUSTER_NAME>` | Replace with your Kubernetes cluster name for identification in New Relic |
| `<YOUR_NEWRELIC_OTLP_ENDPOINT>` | Update with your region's OTLP endpoint. See OTLP endpoint documentation |
| `app` and `role` labels | Pod labels used to identify NGINX pods (update the rule to match your labels) |
| `collection_interval` | Interval at which metrics are collected. The default value is `30s` |
| `send_batch_max_size` | Maximum number of metrics to batch before sending. The default value is `1000` |
| `timeout` | Time to wait before sending batched metrics. The default value is `30s` |

Add the following sections to your existing `deployment-configmap.yaml`:

Extensions to add:
```yaml
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  k8s_observer:
    auth_type: serviceAccount
    observe_pods: true
    observe_nodes: true
```

Receivers to add:
```yaml
receivers:
  receiver_creator/nginx:
    watch_observers: [k8s_observer]
    receivers:
      nginx:
        rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy" # Update with your labels
        config:
          endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as needed
          metrics:
            nginx.requests:
              enabled: true
            nginx.connections_accepted:
              enabled: true
            nginx.connections_handled:
              enabled: true
            nginx.connections_current:
              enabled: true
          collection_interval: 30s
        resource_attributes:
          nginx.server.endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as needed
          nginx.port: '<YOUR_STUB_STATUS_PORT>' # Update to match your configuration
```

Processors to add:
```yaml
processors:
  resource/cluster:
    attributes:
      - key: k8s.cluster.name
        value: "<CLUSTER_NAME>" # Replace with your cluster name
        action: insert
  transform/nginx:
    metric_statements:
      - context: resource
        statements:
          - set(attributes["nginx.display.name"], Concat(["server", "k8s", attributes["k8s.cluster.name"], attributes["k8s.namespace.name"], "pod", attributes["k8s.pod.name"], "nginx", attributes["nginx.port"]], ":"))
          - set(attributes["nginx.deployment.name"], attributes["k8s.pod.name"])
  transform/metadata_nullify:
    metric_statements:
      - context: metric
        statements:
          - set(description, "")
          - set(unit, "")
```

Service pipelines to add:
```yaml
service:
  extensions: [health_check, k8s_observer] # Add to existing extensions
  pipelines:
    metrics/nginx:
      receivers: [receiver_creator/nginx]
      processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify, memory_limiter]
      exporters: [otlphttp/newrelic]
```

Configuration parameters
The following table describes the key configuration parameters:
| Parameter | Description |
|---|---|
| `<YOUR_STUB_STATUS_PORT>` | Replace with your NGINX stub status port (for example, 80 or 8080) |
| `<YOUR_STUB_STATUS_PATH>` | Replace with your NGINX stub status path (for example, basic_status) |
| `<CLUSTER_NAME>` | Replace with your Kubernetes cluster name for identification in New Relic |
| `app` and `role` labels | Pod labels used to identify NGINX pods (update the rule to match your labels) |
| `collection_interval` | Interval at which metrics are collected. The default value is `30s` |
| `memory_limiter` | Processor used in the existing Kubernetes configuration to limit memory usage |
Apply the updated manifests and restart the deployment.
For NGINX-only monitoring, run these commands:
```bash
kubectl apply -n newrelic -R -f rendered
kubectl delete daemonset nr-k8s-otel-collector-daemonset -n newrelic
kubectl delete deployment nr-k8s-otel-collector-kube-state-metrics -n newrelic
kubectl rollout restart deployment nr-k8s-otel-collector-deployment -n newrelic
```

For K8s + NGINX monitoring, run these commands:

```bash
kubectl apply -n newrelic -R -f rendered
kubectl rollout restart deployment nr-k8s-otel-collector-deployment -n newrelic
```
Update your `values.yaml` for the `nr-k8s-otel-collector` Helm chart with the following changes:

a. Update the collector image repository and tag to use NRDOT:

```yaml
images:
  collector:
    repository: newrelic/nrdot-collector
    tag: latest
```

b. Add the following sections under `deployment.extraConfig`:

Important

In `deployment.extraConfig`, `pipelines:` must be defined at the root level, not nested under `service:`. The Helm chart template maps `extraConfig.pipelines` into `service.pipelines`, and `extraConfig.service.extensions` into `service.extensions`.

Extensions:
```yaml
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  k8s_observer:
    auth_type: serviceAccount
    observe_pods: true
    observe_nodes: true
```

Receivers:
```yaml
receivers:
  receiver_creator/nginx:
    watch_observers: [k8s_observer]
    receivers:
      nginx:
        rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy" # Update with your labels
        resource_attributes:
          nginx.server.endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as needed
          nginx.port: '<YOUR_STUB_STATUS_PORT>' # Update to match your configuration
        config:
          endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as needed
          metrics:
            nginx.requests:
              enabled: true
            nginx.connections_accepted:
              enabled: true
            nginx.connections_handled:
              enabled: true
            nginx.connections_current:
              enabled: true
          collection_interval: 30s
```
Processors:

```yaml
processors:
  resource/cluster:
    attributes:
      - key: k8s.cluster.name
        value: "<CLUSTER_NAME>" # Replace with your cluster name
        action: insert
  transform/nginx:
    metric_statements:
      - context: resource
        statements:
          - set(attributes["nginx.display.name"], Concat(["server", "k8s", attributes["k8s.cluster.name"], attributes["k8s.namespace.name"], "pod", attributes["k8s.pod.name"], "nginx", attributes["nginx.port"]], ":"))
          - set(attributes["nginx.deployment.name"], attributes["k8s.pod.name"])
  transform/metadata_nullify:
    metric_statements:
      - context: metric
        statements:
          - set(description, "")
          - set(unit, "")
```

Service extensions:

```yaml
service:
  extensions: [health_check, k8s_observer]
```

Pipelines:

```yaml
pipelines:
  metrics/nginx:
    receivers: [receiver_creator/nginx]
    processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify, memory_limiter]
    exporters: [otlphttp/newrelic]
```

Configuration parameters
The following table describes the key configuration parameters:
| Parameter | Description |
|---|---|
| `<YOUR_STUB_STATUS_PORT>` | Replace with your NGINX stub status port (for example, 80 or 8080) |
| `<YOUR_STUB_STATUS_PATH>` | Replace with your NGINX stub status path (for example, basic_status) |
| `<CLUSTER_NAME>` | Replace with your Kubernetes cluster name for identification in New Relic |
| `app` and `role` labels | Pod labels used to identify NGINX pods (update the rule to match your labels) |
| `collection_interval` | Interval at which metrics are collected. The default value is `30s` |
| `send_batch_max_size` | Maximum number of metrics to batch before sending. The default value is `1000` |
| `timeout` | Time to wait before sending batched metrics. The default value is `30s` |
| `memory_limiter` | Processor used to limit memory usage |
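Taken together, the `deployment.extraConfig` value has the overall shape sketched below. This is a structural sketch only (section bodies are elided with `{}` placeholders), assuming the chart mapping described in the Important note; the key point is that `pipelines:` sits at the root of `extraConfig`, while the extensions list lives under `service:`:

```yaml
deployment:
  extraConfig:
    extensions: {}   # health_check and k8s_observer definitions go here
    receivers: {}    # the receiver_creator/nginx definition goes here
    processors: {}   # resource/cluster, transform/nginx, transform/metadata_nullify go here
    service:
      extensions: [health_check, k8s_observer]
    pipelines:       # root level; the chart maps this into service.pipelines
      metrics/nginx:
        receivers: [receiver_creator/nginx]
        processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify, memory_limiter]
        exporters: [otlphttp/newrelic]
```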
- Apply the updated values to your running Helm release:
```bash
helm upgrade nr-k8s-otel-collector newrelic/nr-k8s-otel-collector \
  --namespace newrelic \
  --reuse-values \
  -f values.yaml
```

Before you begin
Ensure you have:
- Valid New Relic license key
- HTTP stub status module enabled on each NGINX pod that needs to be monitored
- Labels `app` and `role` added to each NGINX pod that needs to be monitored
- Helm installed
Configure the OpenTelemetry Collector
Deploy the OpenTelemetry Collector to your Kubernetes cluster using Helm. The collector will automatically discover and scrape metrics from your NGINX pods.
Download or create a custom values.yaml file based on the OpenTelemetry Collector values.yaml.
Update the following sections in your values.yaml file:
Set mode to deployment:
mode: deploymentReplace the image repository:
image:repository: otel/opentelemetry-collector-contribConfigure cluster role:
clusterRole:create: truerules:- apiGroups: [""]resources: ["pods", "nodes", "nodes/stats", "nodes/proxy"]verbs: ["get", "list", "watch"]- apiGroups: ["apps"]resources: ["replicasets"]verbs: ["get", "list", "watch"]Configure resource limits:
resources:limits:cpu: 250mmemory: 512MiReplace the entire config section with NGINX monitoring configuration:
config:extensions:health_check:endpoint: 0.0.0.0:13133k8s_observer:auth_type: serviceAccountobserve_pods: trueobserve_nodes: truereceivers:receiver_creator/nginx:watch_observers: [k8s_observer]receivers:nginx:rule: type == "pod" && labels["app"] == "nginx" && labels["role"] == "reverse-proxy" # Update with your labelsconfig:endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as neededmetrics:nginx.requests:enabled: truenginx.connections_accepted:enabled: truenginx.connections_handled:enabled: truenginx.connections_current:enabled: truecollection_interval: 30sresource_attributes:nginx.server.endpoint: 'http://`endpoint`:<YOUR_STUB_STATUS_PORT>/<YOUR_STUB_STATUS_PATH>' # Update port and path as needednginx.port: '<YOUR_STUB_STATUS_PORT>' # Update to match your configurationprocessors:batch:send_batch_size: 1024timeout: 30sresource/cluster:attributes:- key: k8s.cluster.namevalue: "<CLUSTER_NAME>" # Replace with your cluster nameaction: inserttransform/nginx:metric_statements:- context: resourcestatements:- set(attributes["nginx.display.name"], Concat(["server","k8s",attributes["k8s.cluster.name"],attributes["k8s.namespace.name"],"pod",attributes["k8s.pod.name"],"nginx",attributes["nginx.port"]], ":"))- set(attributes["nginx.deployment.name"], attributes["k8s.pod.name"])transform/metadata_nullify:metric_statements:- context: metricstatements:- set(description, "")- set(unit, "")exporters:otlp_http/newrelic:endpoint: "<YOUR_NEWRELIC_OTLP_ENDPOINT>" # Update for your regionheaders:api-key: "<YOUR_NEW_RELIC_LICENSE_KEY>" # Replace with your New Relic license keyservice:extensions: [health_check, k8s_observer]pipelines:metrics/nginx:receivers: [receiver_creator/nginx]processors: [batch, resource/cluster, transform/nginx, transform/metadata_nullify]exporters: [otlp_http/newrelic]
Configuration parameters
The following table describes the key configuration parameters:
| Parameter | Description |
|---|---|
| `<YOUR_STUB_STATUS_PORT>` | Replace with your NGINX stub status port (for example, 80 or 8080) |
| `<YOUR_STUB_STATUS_PATH>` | Replace with your NGINX stub status path (for example, basic_status) |
| `<CLUSTER_NAME>` | Replace with your Kubernetes cluster name for identification in New Relic |
| `<YOUR_NEWRELIC_OTLP_ENDPOINT>` | Update with your region's OTLP endpoint. See OTLP endpoint documentation |
| `<YOUR_NEW_RELIC_LICENSE_KEY>` | Replace with your New Relic license key |
| `app` and `role` labels | Pod labels used to identify NGINX pods (update the rule to match your labels) |
| `endpoint` | NGINX stub status endpoint path (update if using a different path) |
| `collection_interval` | Interval at which metrics are collected. The default value is `30s` |
| `send_batch_size` | Number of metrics to batch before sending. The default value is `1024` |
| `timeout` | Time to wait before sending batched metrics. The default value is `30s` |
Follow the OpenTelemetry Collector Helm chart installation guide to install the collector using your custom values.yaml file.
Example commands:
```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm upgrade my-opentelemetry-collector open-telemetry/opentelemetry-collector -f your-custom-values.yaml -n newrelic --create-namespace --install
```

Verify the pods are running:

```bash
kubectl get pods -n newrelic --watch
```

You should see the OpenTelemetry Collector pods in a `Running` state in the `newrelic` namespace.

Run an NRQL query in New Relic to verify data collection. Replace the cluster name with your actual cluster name:
```sql
FROM Metric
SELECT *
WHERE metricName LIKE 'nginx.%'
AND instrumentation.provider = 'opentelemetry'
AND k8s.cluster.name = 'your-cluster-name'
SINCE 10 minutes ago
```
View your data in New Relic
Once your setup is complete and data is flowing, you can access your NGINX metrics in New Relic dashboards and create custom alerts.
For complete instructions on accessing dashboards, querying data with NRQL, and creating alerts, see Find and query your NGINX data.
Metrics and attributes reference
This integration collects the same core NGINX metrics as the on-host deployment, with additional Kubernetes-specific resource attributes for cluster, namespace, and pod identification.
For complete metrics and attributes reference: See NGINX OpenTelemetry metrics and attributes reference for detailed descriptions of all metrics, types, and resource attributes for Kubernetes deployments.
Next steps
Explore related monitoring:
- Monitor NGINX Plus with OpenTelemetry - For commercial NGINX Plus deployments
- Monitor self-hosted NGINX with OpenTelemetry - For traditional server deployments
Kubernetes-specific resources:
- OpenTelemetry Collector on Kubernetes - Advanced collector configurations