The Kubernetes integration is compatible with many different platforms, including GKE, EKS, AKS, and OpenShift. Each platform has its own compatibility considerations for the integration, described on this page.
Requirements
The New Relic Kubernetes integration requires a New Relic account. If you haven't already, create your free New Relic account to start monitoring your data today.
You'll also need a Linux distribution compatible with the New Relic infrastructure agent.
Important
- `kube-state-metrics` v2 or higher is supported from integration version 3.6.0 or higher.
- Install the Kubernetes integration up to version 3.5.0 if you're using `kube-state-metrics` 1.9.8 or lower.
- Check the `values.yaml` file if you're updating `kube-state-metrics` from v1.9.8 to v2 or higher, because some variables may have changed.
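Before upgrading, you can confirm which `kube-state-metrics` version is currently running by inspecting the container image tag. This is a sketch, assuming the deployment uses the standard `app.kubernetes.io/name` label; adjust the selector to match your deployment:

```shell
# Print the kube-state-metrics image reference (the tag is the version).
# The label selector is an assumption; adjust it to your deployment.
kubectl get pods --all-namespaces \
  -l app.kubernetes.io/name=kube-state-metrics \
  -o jsonpath='{.items[*].spec.containers[*].image}'
```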
Compatibility and requirements for Helm
Make sure Helm is installed and that it meets the minimum supported version, v3. Version 3 of the Kubernetes integration requires Helm version 3.
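You can verify that your Helm client meets the v3 requirement before installing:

```shell
# Print the Helm client version; the output should start with v3
helm version --short
```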
Choose a display name for your cluster. For example, you could use this output:

```shell
kubectl config current-context
```
Compatibility and requirements for Manifest
If custom manifests have been used instead of Helm, you will need to first remove the old installation using kubectl delete -f previous-manifest-file.yml
, and then proceed through the guided installer again. This will generate an updated set of manifests that can be deployed using kubectl apply -f manifest-file.yml
.
Container runtime
Our Kubernetes integration is CRI-agnostic. It has been specifically tested for compatibility with containerd. Note that Dockershim was removed from the Kubernetes project as of release 1.24. Read the Dockershim Removal FAQ for further details.
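You can check which container runtime each node uses, since `kubectl get nodes -o wide` includes a CONTAINER-RUNTIME column:

```shell
# The CONTAINER-RUNTIME column shows each node's CRI, e.g. containerd://1.7.11
kubectl get nodes -o wide
```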
Compatibility
Important
If you are using OpenShift, you can use `kubectl` most of the time, but be aware that `kubectl` doesn't include commands such as `oc login` or `oc adm`. You may need to use `oc` instead of `kubectl`.
Our integration is compatible and is continuously tested on the following Kubernetes versions:
| | Versions |
|---|---|
| Kubernetes cluster | 1.27 to 1.31 |
Important
Starting with Kubernetes version 1.26, the `autoscaling/v2` API has replaced the `autoscaling/v2beta2` API. For continued `HorizontalPodAutoscaler` metric reporting, you must install `kube-state-metrics` version 2.7 or higher on Kubernetes 1.26+ clusters, because only `kube-state-metrics` v2.7+ supports the `autoscaling/v2` API.
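To confirm which autoscaling API versions your cluster serves, you can list them with `kubectl`:

```shell
# On Kubernetes 1.26+, expect autoscaling/v2 in the output
# (autoscaling/v2beta2 is no longer served)
kubectl api-versions | grep autoscaling
```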
Kubernetes Flavors
The Kubernetes integration is compatible with different flavors. We tested the integration with the following ones:
| Flavor | Notes |
|---|---|
| Minikube | |
| Kind | |
| K3s | |
| Kubeadm | |
| Amazon Elastic Kubernetes Service (EKS) | |
| Amazon Elastic Kubernetes Service Anywhere (EKS-Anywhere) | |
| Amazon Elastic Kubernetes Service on Fargate (EKS-Fargate) | |
| Rancher Kubernetes Engine (RKE1) | Extra configuration is needed to instrument control plane components. |
| Azure Kubernetes Service (AKS) | |
| Google Kubernetes Engine (GKE) | Compatible with Standard and Autopilot modes. |
| OpenShift | Tested with version 4.14. |
| VMware Tanzu | Compatible with VMware Tanzu (Pivotal Platform) versions 2.5 to 2.11, and Ops Manager versions 2.5 to 2.10. |
Depending on the installation method, control plane monitoring may be unavailable or may need extra configuration.
For example:
- On managed clusters (GKE, EKS, AKS), only API server metrics are scrapable and available for instrumenting the control plane, because no endpoint exposes the needed metrics for etcd, the scheduler, and the controller manager.
- To instrument the Rancher control plane, some extra configuration is needed, because the components' `/metrics` endpoints are not always reachable by default and can't be autodiscovered.
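On self-managed clusters, you can check whether control plane component pods are visible and therefore potentially scrapable. This is a sketch assuming kubeadm-style static pods carrying the `tier=control-plane` label; other distributions may label them differently:

```shell
# List control plane component pods; on managed clusters (GKE, EKS, AKS)
# these pods aren't visible, which is why only API server metrics are scrapable
kubectl get pods -n kube-system -l tier=control-plane
```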
Resource requirements
When deploying the New Relic Kubernetes integration, it's important to allocate appropriate resources so that the monitoring components operate efficiently.
The following are the recommended minimum resource requests and limits for each of the components deployed by the `newrelic-infrastructure` chart.
Kubelet component
The following containers are included in the Kubelet component pod deployed in each node.
Kubelet container
- CPU:
  - Request: `100m`
- Memory:
  - Request: `150M`
  - Limit: `300M`
Agent container
- CPU:
  - Request: `100m`
- Memory:
  - Request: `150M`
  - Limit: `300M`
Kube State Metric component
KSM container
- CPU:
  - Request: `100m`
- Memory:
  - Request: `150M`
  - Limit: `850M`
Forwarder container
- CPU:
  - Request: `100m`
- Memory:
  - Request: `150M`
  - Limit: `850M`
Control plane component
- CPU:
  - Request: `100m`
- Memory:
  - Request: `150M`
  - Limit: `300M`
Agent container
- CPU:
  - Request: `100m`
- Memory:
  - Request: `150M`
  - Limit: `300M`
The following are the recommended resource requests and limits required by other components deployed as part of the `nri-bundle` chart.
Metadata injection
- CPU:
  - Request: `100m`
- Memory:
  - Request: `30M`
  - Limit: `80M`
Logging
The following containers are included in the New Relic logging pod deployed in each node.
- CPU:
  - Request: `250m`
  - Limit: `500m`
- Memory:
  - Request: `64M`
  - Limit: `128M`
Considerations
Cluster Size: These resource recommendations are for typical cluster sizes. Larger clusters with more nodes and pods may require increased resource allocations to handle the additional data volume.
Custom Configurations: If you enable additional features or custom configurations, consider adjusting the resources accordingly.
Monitoring and Adjustment: After deployment, monitor the resource usage of these pods and adjust the requests and limits based on actual usage to optimize performance and cost.
These resource specifications can be adjusted in the `values.yaml` file of the Helm chart used for deploying the New Relic Kubernetes integration.
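As an illustration, here is a hedged sketch of overriding the Kubelet component's resources through a values override file. The key names and file name are assumptions based on common chart layouts; verify them against the `values.yaml` of your chart version before applying:

```yaml
# values-override.yaml (hypothetical file name)
# Key names below are assumptions; check your chart version's values.yaml.
newrelic-infrastructure:
  kubelet:
    resources:
      requests:
        cpu: 100m
        memory: 150M
      limits:
        memory: 300M
```

You could then pass this file to Helm with the `-f` flag when running `helm upgrade --install`.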
By ensuring these resource requirements are met, you can maintain efficient and effective monitoring of your Kubernetes cluster with New Relic.