New Relic provides a Helm chart to deploy the OpenTelemetry Collector in your Kubernetes cluster. This Helm chart can be customized to meet your specific needs, including advanced configurations for various use cases.
The nr-k8s-otel-collector Helm chart supports both DaemonSet and Deployment collectors, allowing you to choose the best fit for your use case. These collectors can be configured to customize their behavior. For more information on installing the New Relic OpenTelemetry Collector in Kubernetes, refer to the installation guide.
This document provides an overview of some of the key advanced configuration options.
Enable GKE Autopilot or Red Hat OpenShift Compatibility
To run in restricted Kubernetes environments, you can enable provider-specific configurations. This setting ensures proper functionality of the OpenTelemetry Collectors by adapting them to the specific constraints of these environments.
Enable this option in your values.yaml file:
provider: "GKE_AUTOPILOT" # Or "OPEN_SHIFT" if applicable
Enable LowDataMode
The LowDataMode option is enabled by default and ingests only the metrics required by our Kubernetes UIs. This mode reduces the amount of data collected, focusing on essential metrics for Kubernetes monitoring.
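If you need metrics beyond what our Kubernetes UIs require, you can turn this mode off. A minimal sketch, assuming the chart exposes a top-level lowDataMode flag in values.yaml (verify the exact key in the chart's values):

# values.yaml (sketch): collect the full metric set instead of the reduced one.
# Note: disabling LowDataMode increases data ingest.
lowDataMode: false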
Add Additional Metrics in LowDataMode
To fetch additional metrics, add new pipelines and configure the appropriate receivers and processors in your values.yaml file using the extraConfig section.
The following example shows how to add the cadvisor_version_info metric to a new pipeline. You can reuse existing receivers or define your own. Processors are added to filter specific metrics and enrich them with Kubernetes attributes.
extraConfig:
  receivers:
  processors:
    filter/keep_cadvisor_version_info:
      metrics:
        metric:
          - name != "cadvisor_version_info" # Exclude all metrics except cadvisor_version_info
  exporters:
  connectors:
  pipelines:
    metrics/additional_metrics:
      receivers:
        - prometheus # This references the existing prometheus receiver
      processors:
        - filter/keep_cadvisor_version_info
        - resource # Essential for basic resource attributes
        - k8sattributes/ksm # Essential for Kubernetes metadata enrichment
        - cumulativetodelta # Converts cumulative metrics to delta
        - batch # For efficient data sending
      exporters:
        - otlphttp/newrelic
For a comprehensive list of available receivers, processors, exporters, and pipelines that you can reuse in your configurations, refer to the New Relic Helm Charts repository.
Send data to multiple New Relic accounts
To send your Kubernetes telemetry data to multiple New Relic accounts simultaneously, inject your secondary ingest license key(s) into the OpenTelemetry Collector container and configure additional OTLP exporters.
To inject your secondary license key(s):
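The steps below assume a Kubernetes Secret holding the secondary key already exists in the collector's namespace. As a sketch, such a Secret could look like this (metadata.name, namespace, and the key name are hypothetical placeholders; substitute them for <Your Secret Name> and <Your Secret Key> below):

apiVersion: v1
kind: Secret
metadata:
  name: newrelic-secondary-license # hypothetical <Your Secret Name>
  namespace: newrelic # namespace where the collectors run
type: Opaque
stringData:
  license-key: <YOUR_SECONDARY_INGEST_LICENSE_KEY> # hypothetical <Your Secret Key>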
1. In the envs section of your values.yaml file, add the following environment variable for each secondary ingest license key you want to use:

daemonset:
  envs:
    - name: MY_SECONDARY_LICENSE_KEY_VAR # Choose a descriptive environment variable name
      valueFrom:
        secretKeyRef:
          name: <Your Secret Name> # Name of your Kubernetes Secret
          key: <Your Secret Key> # Key within the Secret that holds the license key
deployment:
  envs:
    - name: MY_SECONDARY_LICENSE_KEY_VAR
      valueFrom:
        secretKeyRef:
          name: <Your Secret Name>
          key: <Your Secret Key>

2. In the envsFrom section of your values.yaml file, add the following reference for each secondary license key you want to use:

daemonset:
  envsFrom:
    - secretRef:
        name: <Your Secret Name>
deployment:
  envsFrom:
    - secretRef:
        name: <Your Secret Name>
3. Add an otlphttp exporter in the extraConfig section for each additional account, referencing the injected environment variable:

daemonset:
  configMap:
    extraConfig:
      exporters:
        otlphttp/secondAccount: # Unique name for this exporter
          endpoint: "{{ include 'nrKubernetesOtel.endpoint' }}"
          headers:
            api-key: ${env:MY_SECONDARY_LICENSE_KEY_VAR} # Reference the env var
deployment:
  configMap:
    extraConfig:
      exporters:
        otlphttp/secondAccount: # Unique name for this exporter
          endpoint: "{{ include 'nrKubernetesOtel.endpoint' }}"
          headers:
            api-key: ${env:MY_SECONDARY_LICENSE_KEY_VAR} # Reference the env var
      # Important: Add this exporter to the relevant pipelines below
      pipelines:
        metrics:
          exporters:
            - otlphttp/newrelic # Original exporter
            - otlphttp/secondAccount # New exporter
        traces:
          exporters:
            - otlphttp/newrelic
            - otlphttp/secondAccount
        logs:
          exporters:
            - otlphttp/newrelic
            - otlphttp/secondAccount
Tip: You must also add the otlphttp/secondAccount exporter to the relevant pipelines (metrics, traces, and logs) within your extraConfig for both the daemonset and deployment collectors to ensure data is actually sent through this new exporter.

After updating your values.yaml file, apply the changes to your cluster:

$ helm upgrade nr-k8s-otel-collector newrelic/nr-k8s-otel-collector -f your-custom-values.yaml -n newrelic
Send data via a proxy
To send your Kubernetes telemetry data through a proxy, you can configure the OpenTelemetry Collector to use an HTTP proxy for outbound connections. This is particularly useful in environments where direct internet access is restricted or monitored.
You can configure the OpenTelemetry Collector to use a proxy using one of the following methods:
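For example, since the collector's HTTP exporters honor the standard proxy environment variables, one approach is to inject them through the chart's envs settings. A sketch, assuming a hypothetical proxy reachable at http://proxy.internal:3128:

daemonset:
  envs:
    - name: HTTPS_PROXY
      value: "http://proxy.internal:3128" # hypothetical proxy URL
    - name: NO_PROXY
      value: "kubernetes.default.svc" # in-cluster hosts that must bypass the proxy
deployment:
  envs:
    - name: HTTPS_PROXY
      value: "http://proxy.internal:3128"
    - name: NO_PROXY
      value: "kubernetes.default.svc"

Alternatively, the otlphttp exporter supports a per-exporter proxy_url setting, which you can apply through the extraConfig section described below (a sketch, assuming extraConfig entries are merged over the chart's default exporter settings):

daemonset:
  configMap:
    extraConfig:
      exporters:
        otlphttp/newrelic:
          proxy_url: "http://proxy.internal:3128" # hypothetical proxy URL
deployment:
  configMap:
    extraConfig:
      exporters:
        otlphttp/newrelic:
          proxy_url: "http://proxy.internal:3128"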
Add custom configurations in the Helm chart
The extraConfig section within the values.yaml file provides a powerful way to extend the functionality of both the daemonset and deployment collectors. You can choose either collector to apply additional configurations, allowing you to tailor your monitoring experience.
These options offer flexibility for integrating specific settings not included by default.
For further customization, you can refer to our comprehensive list of receivers, processors, exporters, and pipelines to reuse in your configurations.
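As the multi-account example above shows, the extraConfig block lives under each collector's configMap key and accepts the standard OpenTelemetry Collector building blocks (a structural sketch; fill in only the parts you need):

daemonset:
  configMap:
    extraConfig:
      receivers: # additional receivers
      processors: # additional processors
      exporters: # additional exporters
      connectors: # additional connectors
      pipelines: # new pipelines wiring the components above
deployment:
  configMap:
    extraConfig: {} # same structure for the deployment collector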
You can employ several recommended processors in your pipeline to enhance your telemetry data's efficiency and relevance:
- resource: Ensures your metrics data contains essential resource information, adding clarity to your data analysis.
- k8sattributes: Incorporates Kubernetes-specific attributes into your metrics for detailed insights into your cluster's behavior and performance.
- cumulativetodelta: Transforms cumulative metrics into delta metrics for improved tracking of changes over time.
- batch: Processes and exports metrics in batches, optimizing performance during data collection.
These processors work together to refine your data for more precise monitoring and alerting. Customize the settings according to your specific use case. The Enable Prometheus service discovery section below provides an example of how you can use the extraConfig section to set up service discovery using the standard prometheus.io/scrape annotation.
Enable Prometheus service discovery
To enable Prometheus service discovery within your Kubernetes cluster, use the extraConfig section in your deployment collector's configuration. This allows the OpenTelemetry Collector to automatically discover and scrape metrics from pods annotated with prometheus.io/scrape.
Here's an example configuration snippet to set up service discovery using the standard prometheus.io/scrape annotation:
extraConfig:
  receivers:
    prometheus/discover:
      config:
        scrape_configs:
          - job_name: "auto-discovered-services"
            scrape_interval: 30s # Set the scrape interval to 30 seconds
            kubernetes_sd_configs:
              - role: pod
            relabel_configs:
              # Keep only pods annotated with prometheus.io/scrape: "true"
              - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                action: keep
                regex: true
              # Drop kube-state-metrics pods
              - source_labels: [__meta_kubernetes_pod_label_app]
                action: drop
                regex: kube-state-metrics
              # Rewrite the target address to use the port from the prometheus.io/port annotation
              - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
                action: replace
                target_label: __address__
                separator: ;
                regex: (.+?)(?::\d+)?;(\d+)
                replacement: $1:$2
              - action: replace
                target_label: job_label
                replacement: auto-discovery
  processors:
  exporters:
  connectors:
  pipelines:
    metrics/prom_auto_discover:
      receivers:
        - prometheus/discover
      processors:
        - resource/metrics
        - k8sattributes/ksm
        - cumulativetodelta
        - batch
      exporters:
        - otlphttp/newrelic
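With this pipeline in place, the collector scrapes any pod that opts in through the standard annotations. A sketch of pod metadata this job would discover (the port value is illustrative):

metadata:
  annotations:
    prometheus.io/scrape: "true" # opt the pod in to scraping
    prometheus.io/port: "9090" # port where the pod exposes its metrics endpoint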