New Relic provides Control Plane support for your Kubernetes integration, allowing you to monitor and collect metrics from your cluster's Control Plane components. That data can then be found in New Relic and used to create queries and charts.
Features
We monitor and collect metrics from the following control plane components:
- ETCD: leader information, resident memory size, number of OS threads, consensus proposals data, etc. For a list of supported metrics, see ETCD data.
- API server: rate of `apiserver` requests, breakdown of `apiserver` requests by HTTP method and response code, etc. For the complete list of supported metrics, see API server data.
- Scheduler: requested CPU/memory vs. available on the node, tolerations to taints, any set affinity or anti-affinity, etc. For the complete list of supported metrics, see Scheduler data.
- Controller manager: resident memory size, number of OS threads created, goroutines currently existing, etc. For the complete list of supported metrics, see Controller manager data.
Compatibility and requirements
- Control plane monitoring requires Kubernetes integration version 1.11.0 or higher.
- Control plane monitoring support is not enabled for managed clusters. This is because providers (EKS, GKE, AKS, etc.) abstract away the concept of master nodes and control plane components, so that access to them is limited or non-existent.
- The unprivileged version of the Kubernetes integration does not support control plane monitoring.
- OpenShift 4.x uses control plane component metric endpoints that differ from the defaults.
Discovery of master nodes and control plane components
The Kubernetes integration relies on the kubeadm labeling conventions to discover the master nodes and the control plane components. This means that master nodes should be labeled with `node-role.kubernetes.io/master=""` or `kubernetes.io/role="master"`.
The control plane components should have either the `k8s-app` label or the `tier` and `component` labels. Refer to the following table for accepted label combinations and values:
Component | Label | Endpoint |
---|---|---|
API server | Kubeadm / Kops / ClusterAPI: `tier=control-plane`, `component=kube-apiserver`; OpenShift: `app=openshift-kube-apiserver`, `apiserver=true` | `localhost:443/metrics` by default (can be configured); if the request fails, falls back to `localhost:8080/metrics` |
ETCD | Kubeadm / Kops / ClusterAPI: `k8s-app=etcd-manager-main`; OpenShift: `k8s-app=etcd` | `localhost:4001/metrics` |
Scheduler | Kubeadm / Kops / ClusterAPI: `tier=control-plane`, `component=kube-scheduler`; OpenShift: `app=openshift-kube-scheduler`, `scheduler=true` | `localhost:10251/metrics` |
Controller manager | Kubeadm / Kops / ClusterAPI: `tier=control-plane`, `component=kube-controller-manager`; OpenShift: `app=kube-controller-manager`, `kube-controller-manager=true` | `localhost:10252/metrics` |
When the integration detects that it is running inside a master node, it tries to find which components are running on the node by looking for pods that match the labels listed in the table above. For every running component, the integration makes a request to its metrics endpoint.
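For example, you can check whether discovery will find your masters and components with `kubectl`. This is a quick sanity check, not part of the integration itself; the label selectors below mirror the table above, so adjust them to your distribution:

```sh
# Check that master nodes carry one of the expected labels:
kubectl get nodes -l node-role.kubernetes.io/master=""
kubectl get nodes -l kubernetes.io/role=master

# Check that a control plane component (here, the scheduler) is discoverable
# by its labels in the kube-system namespace:
kubectl get pods -n kube-system -l tier=control-plane,component=kube-scheduler
```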
Configuration
Control plane monitoring is automatic for agents running inside master nodes. The only component that requires an extra step is ETCD, because it uses mutual TLS authentication (mTLS) for client requests. The API server can also be configured to be queried over the secure port.
Control plane monitoring for OpenShift 4.x requires additional configuration. For more information, see the OpenShift configuration section.
ETCD
To set up mTLS for querying ETCD, there are two configuration options that need to be set:
Option | Value |
---|---|
`ETCD_TLS_SECRET_NAME` | Name of a Kubernetes secret that contains the mTLS configuration. The secret should contain the following keys: `cert`, the certificate identifying the client; `key`, the client certificate's private key; and `cacert`, the root CA used to verify the ETCD server certificate. If this option is not set, ETCD metrics are not fetched. For step-by-step instructions on how to create a certificate and sign it with the ETCD client CA, see Set up mTLS from the ETCD client CA. |
`ETCD_TLS_SECRET_NAMESPACE` | The namespace where the secret specified in `ETCD_TLS_SECRET_NAME` was created. If not set, the `default` namespace is used. |
API server
By default, the API server metrics are queried using the `localhost:8080` unsecured endpoint. If this port is disabled, you can also query these metrics over the secure port. To enable this, set the following configuration option in the Kubernetes integration manifest file:
Option | Value |
---|---|
`API_SERVER_ENDPOINT_URL` | The (secure) URL used to query the metrics. The API server uses `localhost:443` by default. Ensure that the `ClusterRole` has been updated to the newest version found in the manifest. Added in version 1.15.0. |
Note that the port may differ depending on the secure port used by the API server. For example, in Minikube the API server secure port is 8443, and therefore `API_SERVER_ENDPOINT_URL` should be set to `https://localhost:8443`.
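As a sketch, the corresponding entry in the integration's container spec would look like this for the Minikube example above (the value is the one just derived; adapt it to your cluster's secure port):

```yaml
- name: "API_SERVER_ENDPOINT_URL"
  value: "https://localhost:8443"
```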
OpenShift configuration
Control plane components on OpenShift 4.x use endpoint URLs that require SSL and service account based authentication. Therefore, the default endpoint URLs cannot be used.
To configure control plane monitoring on OpenShift, uncomment the following environment variables in the customized manifest. URL values are pre-configured to the default base URLs for the control plane monitoring metrics endpoints in OpenShift 4.x.
- name: "SCHEDULER_ENDPOINT_URL" value: "https://localhost:10259 - name: "ETCD_ENDPOINT_URL" value: "https://localhost:9979" - name: "CONTROLLER_MANAGER_ENDPOINT_URL" value: "https://localhost:10257" - name: "API_SERVER_ENDPOINT_URL" value: "https://localhost:6443"
Even though a custom `ETCD_ENDPOINT_URL` is defined, ETCD requires HTTPS and mTLS authentication to be configured. For more on configuring mTLS for ETCD in OpenShift, see Set up mTLS for ETCD in OpenShift.
When installing through Helm, specify the `openshift` config to automatically include these endpoints. Setting `openshift.enabled=true` and `openshift.version="4.x"` will include the secure endpoints and enable the `/var/run/crio.sock` runtime.
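For illustration, a Helm install with those options might look like the following. The repository and chart names (`newrelic/newrelic-infrastructure`) and the release name are assumptions; use the ones from your own setup:

```sh
# Sketch: enable the OpenShift 4.x presets when installing through Helm.
helm upgrade --install newrelic-infrastructure newrelic/newrelic-infrastructure \
  --set openshift.enabled=true \
  --set openshift.version="4.x"
```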
Set up mTLS from the ETCD client CA
The instructions below are based on the Kubernetes documentation. For more information, see Managing TLS certificates in a cluster. For OpenShift, see Set up mTLS for ETCD in OpenShift.
To set up mTLS from the ETCD client CA:
- Download and install the tool cfssl, selecting the correct binaries for your OS from the list.
- Once installed, execute the following command:
```sh
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "localhost"
  ],
  "CN": "newrelic-infra.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
```
This command generates two files: `server.csr`, containing the PEM-encoded PKCS#10 certification request, and `server-key.pem`, containing the PEM-encoded key for the certificate to be created.
- Use the generated certificate authority (CA) of ETCD to sign your CSR. Depending on your cluster configuration, you may already have this information. For a default install configuration, download the CA certificate and the private key directly from ETCD with the following commands:
```sh
kubectl cp $(kubectl get pods -l k8s-app=etcd-manager-main -n kube-system -o jsonpath="{.items[0].metadata.name}"):/etc/kubernetes/pki/etcd-manager/etcd-clients-ca.crt ./cacert -n kube-system
kubectl cp $(kubectl get pods -l k8s-app=etcd-manager-main -n kube-system -o jsonpath="{.items[0].metadata.name}"):/etc/kubernetes/pki/etcd-manager/etcd-clients-ca.key ./cacert.key -n kube-system
```
This requires that the `etcd-manager-main` pod has the label `k8s-app=etcd-manager-main`, which is a requirement for control plane monitoring. If your `etcd-manager-main` pod is located in a different namespace, change the `-n kube-system` flags accordingly.
- With those files downloaded, use the following command to sign your CSR:
```sh
cfssl sign -ca cacert -ca-key cacert.key server.csr | cfssljson -bare cert
```
- Create the secret that is used to retrieve the TLS config for making requests to ETCD. We recommend renaming the certificate and the private key:

```sh
cp cert.pem cert && cp server-key.pem key
kubectl -n default create secret generic newrelic-infra-etcd-tls-secret --from-file=./cert --from-file=./key --from-file=./cacert
```
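Optionally, you can confirm that the secret contains the three expected entries before moving on. This check is a suggestion, not part of the original procedure:

```sh
# Should list cert, key, and cacert among the data entries.
kubectl -n default describe secret newrelic-infra-etcd-tls-secret
```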
- To ease future installations, use the following commands to simultaneously create the CSR, retrieve the CA, sign the CSR to generate the certificate, and create the secret with all the required fields:
```sh
cat <<EOF | cfssl genkey - | cfssljson -bare server && \
kubectl cp $(kubectl get pods -l k8s-app=etcd-manager-main -n kube-system -o jsonpath="{.items[0].metadata.name}"):/etc/kubernetes/pki/etcd-manager/etcd-clients-ca.crt ./cacert -n kube-system && \
kubectl cp $(kubectl get pods -l k8s-app=etcd-manager-main -n kube-system -o jsonpath="{.items[0].metadata.name}"):/etc/kubernetes/pki/etcd-manager/etcd-clients-ca.key ./cacert.key -n kube-system && \
cp server-key.pem key && \
cfssl sign -ca cacert -ca-key cacert.key server.csr | cfssljson -bare cert && \
cp cert.pem cert && \
kubectl -n default create secret generic newrelic-infra-etcd-tls-secret --from-file=./cert --from-file=./key --from-file=./cacert
{
  "hosts": [
    "localhost"
  ],
  "CN": "newrelic-infra.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
```
- The last step is to update the configuration in the manifest and apply it. In the configuration section, there are two options related to ETCD mTLS:
  - `ETCD_TLS_SECRET_NAME`: the name of the secret that we just created.
  - `ETCD_TLS_SECRET_NAMESPACE`: the namespace where we created the secret.
To complete the installation, add these variables to the container spec of the integration `DaemonSet` and apply the changes:

```yaml
- name: "ETCD_TLS_SECRET_NAME"
  value: "newrelic-infra-etcd-tls-secret"
- name: "ETCD_TLS_SECRET_NAMESPACE"
  value: "default"
```
Set up mTLS for ETCD in OpenShift
Follow these instructions to set up mutual TLS authentication for ETCD in OpenShift 4.x:
- Export the ETCD client certificates from the cluster to an opaque secret. In a default managed OpenShift cluster, the secret is named `kube-etcd-client-certs` and it is stored in the `openshift-monitoring` namespace.

```sh
kubectl get secret/kube-etcd-client-certs -n openshift-monitoring -o yaml > etcd-secret.yaml
```
- Open the secret file and change the keys:
  - Rename the certificate authority to `cacert`.
  - Rename the client certificate to `cert`.
  - Rename the client key to `key`.
- Optional: change the secret name and namespace to something meaningful.
- Remove these unnecessary keys in the metadata section:
  - `creationTimestamp`
  - `resourceVersion`
  - `selfLink`
  - `uid`
- Install the manifest with its new name and namespace:

```sh
kubectl apply -f etcd-secret.yaml
```
- Go to the last step of Set up mTLS from the ETCD client CA to configure the required environment variables.
See your data
If the integration has been set up correctly, the Kubernetes cluster explorer contains all the Control Plane components and their status in a dedicated section, as shown below.

You can also check for Control Plane data with this NRQL query:
```sql
SELECT latest(timestamp) FROM K8sApiServerSample, K8sEtcdSample, K8sSchedulerSample, K8sControllerManagerSample FACET entityName WHERE clusterName = 'MY_CLUSTER_NAME'
```
If you still can't see Control Plane data, try the solution described in Kubernetes integration troubleshooting: Not seeing data.