This guide provides solutions for common issues you might encounter with the eBPF agent in a Kubernetes environment. Find your problem in the list below for specific resolution steps.
Problem
You're experiencing issues with the eBPF agent in your Kubernetes environment, such as file descriptor limits, privilege errors, or performance problems.
Solution
If you encounter file descriptor limit errors, see our dedicated file descriptor limit troubleshooting guide for detailed resolution steps.
Problem
eBPF agent fails to start due to insufficient privileges.
Solution
Verify that the eBPF agent DaemonSet has the necessary privileges. The Helm chart should automatically configure required permissions.
Check pod security context in your deployment:
```bash
kubectl describe pod <ebpf-agent-pod> -n newrelic
```

Ensure your cluster supports eBPF. Check the kernel version on your nodes:

```bash
kubectl get nodes -o wide
# Kernel version should be 5.4 or later
```
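If you have many nodes, scanning `kubectl get nodes -o wide` by eye gets tedious. The sketch below is one way to flag nodes below the 5.4 minimum; the JSONPath expression and the `sort -V` version comparison are standard `kubectl`/coreutils features, but the distro-suffix stripping is a simplifying assumption that may need adjusting for your node images.

```bash
#!/usr/bin/env bash
# Sketch: flag nodes whose kernel is older than 5.4 (the minimum noted above).
MIN_KERNEL="5.4"

# Succeeds when $1 >= $2, comparing dotted version strings with sort -V.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.nodeInfo.kernelVersion}{"\n"}{end}' |
while read -r node kernel; do
  base="${kernel%%-*}"   # strip distro suffixes like "-generic" before comparing
  if version_ge "$base" "$MIN_KERNEL"; then
    echo "OK:      $node ($kernel)"
  else
    echo "TOO OLD: $node ($kernel)"
  fi
done
```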
Problem
eBPF agent consuming excessive resources or causing performance degradation.
Solution
Monitor resource usage:
```bash
kubectl top pods -n newrelic
```

Adjust memory limits in your Helm values:

```yaml
agent:
  resources:
    limits:
      memory: "2Gi" # Increase if needed
    requests:
      memory: "512Mi"
```

Configure data filtering to reduce load:

```yaml
dropDataForNamespaces: ["kube-system", "monitoring"]
dropDataServiceNameRegex: "kube-dns|otel-collector"
```

Limit protocol monitoring if not all protocols are needed:

```yaml
protocols:
  http:
    enabled: true
  mysql:
    enabled: false # Disable if not needed
```
Problem
eBPF agent pods not starting or unable to send data.
Solution
Check pod status:
```bash
kubectl get pods -n newrelic
kubectl describe pod <ebpf-agent-pod> -n newrelic
```

Review the pod logs:

```bash
kubectl logs <ebpf-agent-pod> -n newrelic
```

Verify network connectivity:

```bash
# Test from within the cluster
kubectl run test-connectivity --image=busybox --rm -it --restart=Never -- \
  nslookup otlp.nr-data.net
```
Important
Ensure ports 4317 and 443 are unblocked at multiple levels:
- Cluster level: For Kubernetes deployments (e.g., AKS clusters), verify the cluster's network security groups allow outbound traffic on these ports
- Infrastructure level: Check that security software (e.g., Microsoft Defender, corporate firewalls) isn't blocking these ports at the infrastructure level
Port blocking can occur at both levels simultaneously, causing connectivity issues even if one level is properly configured.
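A quick way to see which level is blocking traffic is to test raw TCP reachability from inside the cluster (or from a node) before involving the agent. This sketch uses bash's `/dev/tcp` redirection with a timeout; the hostname and ports come from this guide, but run it from wherever your traffic actually originates, since results differ per network path.

```bash
#!/usr/bin/env bash
# Sketch: check outbound TCP reachability for the ports the agent needs.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open:    ${host}:${port}"
  else
    echo "blocked: ${host}:${port}"
    return 1
  fi
}

check_port otlp.nr-data.net 4317
check_port otlp.nr-data.net 443
```

If both ports report `blocked` from a pod but are reachable from your workstation, the restriction is likely at the cluster level rather than in the external infrastructure.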
Check service account and RBAC:
```bash
kubectl get serviceaccount -n newrelic
kubectl get clusterrole,clusterrolebinding -l app.kubernetes.io/name=nr-ebpf-agent
```
Problem
Entity names not appearing correctly or data not attributed to correct services.
Solution
The eBPF agent uses Kubernetes `Service` objects to name entities. Ensure your applications have a corresponding Service defined:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service # This becomes the entity name
spec:
  selector:
    app: my-app
```

If you are missing data, ensure the namespace is not being excluded in your `values.yaml`:

```yaml
# In values.yaml
dropDataForNamespaces: [] # Remove namespaces you want to monitor
```

In Kubernetes, entity names are derived from the Kubernetes service name, for example `mysql-database-service`. On hosts or in Docker, names are a combination of the process name, its directory or container ID, and the listening port, for example `ruby:/home/ubuntu/app:[5678]`.
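To make the host/Docker naming scheme concrete, the helper below assembles a name in that shape. The format string is inferred from the `ruby:/home/ubuntu/app:[5678]` example above, not taken from the agent's source, so treat it as illustrative only.

```bash
# Illustrative only: assemble an entity name in the shape described above
# (process : directory-or-container-ID : [listening port]).
format_entity_name() {
  local process=$1 location=$2 port=$3
  printf '%s:%s:[%s]\n' "$process" "$location" "$port"
}

format_entity_name ruby /home/ubuntu/app 5678
# -> ruby:/home/ubuntu/app:[5678]
```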
Verification steps
Check for successful startup log:
```bash
kubectl logs <ebpf-agent-pod> -n newrelic | grep "STEP-7"
```

The output should show:

```
[STEP-7] => Successfully started the eBPF Agent.
```

Verify data flow in New Relic:
- In the New Relic UI, look for entities with `instrumentation.name = nr_ebpf`.
- Confirm that the entity names match your Kubernetes service names.
Test OTLP endpoint connectivity:
```bash
kubectl exec -it <ebpf-agent-pod> -n newrelic -- \
  curl -v https://otlp.nr-data.net:443
```