
Troubleshooting

While the New Relic Kubernetes OpenTelemetry Collector is designed to be robust and reliable, issues can still arise. This document provides steps for diagnosing common problems you might encounter.

General Collector Pod Issues

Check the logs of the collector pod that's experiencing issues by running this command:

bash
$
kubectl logs [collector-pod-name] -n newrelic

To enable detailed DEBUG level logging for troubleshooting, set the verboseLog parameter to true in the nr-k8s-otel-collector Helm chart.
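
If you installed the chart with Helm, one way to flip this switch without editing files is an in-place upgrade. This is a minimal sketch; the release name, the newrelic repository alias, and the newrelic namespace are assumptions about how the chart was installed, so adjust them to your setup.

bash
$
# Assumes the chart is installed as release "nr-k8s-otel-collector" from the
# "newrelic" Helm repository into the "newrelic" namespace.
helm upgrade nr-k8s-otel-collector newrelic/nr-k8s-otel-collector \
  --namespace newrelic \
  --reuse-values \
  --set verboseLog=true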

Metric collection failures

Problem: Metrics are not being collected or sent to New Relic.

Troubleshooting:

  • Verify scrape configurations: Ensure the Prometheus receiver configurations within the collector's configuration (extraConfig or default) are correct. You can also inspect the ConfigMap the chart actually rendered, as shown after this list.

    bash
    $
    kubectl describe configmap prometheus-config -n monitoring
  • Check pod annotations: If you're using Prometheus service discovery, confirm that your application pods carry the prometheus.io/scrape=true annotation. Annotations don't appear in label output, so inspect the pod metadata directly:

    bash
    $
    kubectl get pods --namespace=[your-namespace] -o yaml | grep 'prometheus.io/scrape'
  • Test network connectivity: Ensure the collector pod can reach the metric endpoints.

    bash
    $
    kubectl exec [collector-pod-name] -- curl http://[service-name]:[port]
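
As a complement to the checks above, confirm which scrape configuration the collector is actually running by reading the ConfigMap the Helm chart rendered. This is a minimal sketch; the ConfigMap name placeholder and the newrelic namespace are assumptions, so list the ConfigMaps first if you're unsure which one holds the collector configuration.

bash
$
# Find the ConfigMaps created by the chart.
kubectl get configmaps -n newrelic
$
# Dump the collector configuration and look for the prometheus receiver's scrape_configs.
kubectl get configmap [collector-configmap-name] -n newrelic -o yaml | grep -A 20 'scrape_configs'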

Configuration overrides not taking effect

Problem: Custom configurations are not properly applied.

Troubleshooting:

  • Review your values.yaml: Double-check your values.yaml file for typos or incorrect indentation in the extraConfig section. You can also render the chart locally to confirm the override is picked up, as shown after this list.

    bash
    $
    cat helm-charts/charts/nr-k8s-otel-collector/values.yaml | grep extraConfig
  • Validate applied ConfigMaps: The Helm chart generates ConfigMaps from your values.yaml. Inspect the resulting ConfigMap to ensure your custom settings are present.

    bash
    $
    kubectl describe configmap [collector-configmap-name] -n monitoring
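
Before upgrading, you can render the chart locally with your values file and confirm that your overrides make it into the generated manifests. This is a minimal sketch; the release name, the newrelic repository alias, and the values file path are assumptions about your install, and [your-custom-key] stands for any setting you added under extraConfig.

bash
$
# Render the chart with your overrides to a local file.
helm template nr-k8s-otel-collector newrelic/nr-k8s-otel-collector \
  -f values.yaml > rendered.yaml
$
# Search the rendered output for a key you added under extraConfig,
# such as a receiver or processor name.
grep -n '[your-custom-key]' rendered.yaml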

Collector failing to start

Problem: The OpenTelemetry collector pod fails to initialize or crashes repeatedly.

Troubleshooting:

  • Inspect pod logs: The most common first step. Look for specific error messages that indicate misconfigurations or missing dependencies. For pods stuck in CrashLoopBackOff, see the additional commands after this list.

    bash
    $
    kubectl logs [collector-pod-name] --namespace=monitoring
  • Verify environment variables: Ensure required environment variables are correctly injected.

    bash
    $
    kubectl exec [collector-pod-name] -- env | grep -i [variable-name]
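
If the pod is restarting repeatedly, the describe output and the previous container's logs usually pinpoint the failure (a bad configuration, a missing secret, or an OOMKilled container). This is a minimal sketch assuming the collector runs in the newrelic namespace; adjust the namespace and pod name to your deployment.

bash
$
# Show container state, restart counts, and recent events for the pod.
kubectl describe pod [collector-pod-name] -n newrelic
$
# Read the logs of the previous (crashed) container instance.
kubectl logs [collector-pod-name] -n newrelic --previous
$
# List recent events in the namespace, oldest first.
kubectl get events -n newrelic --sort-by=.metadata.creationTimestamp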

Network failures

Problem: The collector cannot communicate or send data to New Relic.

Troubleshooting:

  • Check DNS resolution: Ensure the collector pod can resolve service names or New Relic endpoints.

    bash
    $
    kubectl exec [collector-pod-name] -- nslookup [service-name]
  • Run connectivity tests: Test connectivity to internal services and to the external New Relic endpoints; see the endpoint check after this list.

    bash
    $
    kubectl exec [collector-pod-name] -- curl -I http://[service-name]:[port]
  • Review network policies: If you have strict network policies in your cluster, ensure they allow traffic for the OpenTelemetry Collector pods to internal services and external New Relic endpoints.

    bash
    $
    kubectl describe networkpolicy -n [namespace]
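
To check outbound access to New Relic specifically, you can attempt a connection to the OTLP endpoint from inside the collector pod. This is a minimal sketch assuming a US-region account and the otlp.nr-data.net endpoint (EU-region accounts use otlp.eu01.nr-data.net); an HTTP error status on an empty request is expected, what matters is that the connection and TLS handshake succeed.

bash
$
# A completed TLS handshake means egress to New Relic works; a timeout or
# "connection refused" points to a network policy, proxy, or firewall issue.
kubectl exec [collector-pod-name] -n newrelic -- curl -sv https://otlp.nr-data.net:4318/v1/metrics -o /dev/null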

Support

If you have issues with OpenTelemetry observability for Kubernetes, refer to these documents:

Install OpenTelemetry Collector for Kubernetes

Instrument your Kubernetes Cluster in New Relic using OpenTelemetry Collectors.

Advanced configuration for OpenTelemetry Kubernetes

Customize your OpenTelemetry Collector configuration for Kubernetes in New Relic.
