The Kubernetes integration version 2 has some settings and requirements that differ from version 3. This document covers the settings that are specific to version 2 and how they differ from version 3. Where nothing different is specified, the version 3 settings apply.
Caution
New Relic has deprecated version 2 and recommends against using it. Version 2 is no longer supported, but we maintain this documentation for users who still run it.
See Introduction to the Kubernetes integration to get started with the current version of Kubernetes.
To understand version 3 changes, see the Changes introduced in the Kubernetes integration version 3 document.
Monitoring the control plane with integration version 2
This section covers how to configure control plane monitoring on versions 2 and earlier of the integration.
Please note that these versions offered less flexible autodiscovery options and didn't support external endpoints. We strongly recommend you update to version 3 at your earliest convenience.
Autodiscovery and default configuration: `hostNetwork` and `privileged`
In versions lower than v3, when the integration is deployed using `privileged: false`, the `hostNetwork` setting for the control plane component will also be set to `false`.
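For illustration, here's a sketch of what that looks like in an `nri-bundle` values file; the exact nesting of these keys may differ across chart versions, so treat it as an assumption rather than a definitive configuration:

```yaml
# Sketch of an nri-bundle values.yaml excerpt (key nesting assumed):
# in integration versions below v3, privileged: false also implies
# hostNetwork: false for the control plane component, which can make
# localhost-only control plane endpoints unreachable.
newrelic-infrastructure:
  privileged: false
```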
Discovery of control plane nodes and control plane components
The Kubernetes integration relies on the `kubeadm` labeling conventions to discover the control plane nodes and the control plane components. This means that control plane nodes should be labeled with `node-role.kubernetes.io/control-plane=""`.
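For example, a control plane node labeled following this convention could look like the following (the node name is hypothetical):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: master-0   # hypothetical node name
  labels:
    # The integration uses this label to identify control plane nodes
    node-role.kubernetes.io/control-plane: ""
```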
The control plane components should have either the `k8s-app` or the `tier` and `component` labels. See this table for accepted label combinations and values:
| Component | Label | Endpoint |
|---|---|---|
| API server | **Kubeadm / Kops / ClusterAPI**<br/>`tier: control-plane`<br/>`component: kube-apiserver`<br/>**OpenShift**<br/>`app: openshift-kube-apiserver`<br/>`apiserver: true` | `localhost:8080/metrics` by default (unsecured); `localhost:443/metrics` when using the secure port |
| etcd | **Kubeadm / Kops / ClusterAPI**<br/>`tier: control-plane`<br/>`component: etcd`<br/>**OpenShift**<br/>`k8s-app: etcd` | `localhost:4001/metrics` |
| Scheduler | **Kubeadm / Kops / ClusterAPI**<br/>`tier: control-plane`<br/>`component: kube-scheduler`<br/>**OpenShift**<br/>`app: openshift-kube-scheduler`<br/>`scheduler: true` | `localhost:10251/metrics` |
| Controller manager | **Kubeadm / Kops / ClusterAPI**<br/>`tier: control-plane`<br/>`component: kube-controller-manager`<br/>**OpenShift**<br/>`app: kube-controller-manager`<br/>`kube-controller-manager: true` | `localhost:10252/metrics` |
When the integration detects that it's running inside a control plane node, it tries to find which components are running on the node by looking for pods that match the labels listed in the table above. For every running component, the integration makes a request to its metrics endpoint.
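As an illustration, a scheduler pod carrying the kubeadm-style labels from the table would have metadata along these lines (the pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler-master-0   # hypothetical pod name
  namespace: kube-system
  labels:
    # Label pair the integration matches when discovering the scheduler
    tier: control-plane
    component: kube-scheduler
```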
Configuration
Control plane monitoring is automatic for agents running inside control plane nodes. The only component that requires an extra step to run is etcd, because it uses mutual TLS authentication (mTLS) for client requests. The API server can also be configured to be queried using the secure port.
Important
Control plane monitoring for OpenShift 4.x requires additional configuration. For more information, see the OpenShift configuration section.
etcd
To set up mTLS for querying etcd, you need to set these two configuration options:
| Option | Value |
|---|---|
| `ETCD_TLS_SECRET_NAME` | Name of a Kubernetes secret that contains the mTLS configuration. The secret should contain the following keys:<br/>`cert`: the certificate used to identify the client against etcd.<br/>`key`: the private key of the client certificate.<br/>`cacert`: the root CA used to verify the identity of the etcd server. |
| `ETCD_TLS_SECRET_NAMESPACE` | The namespace where the secret specified in `ETCD_TLS_SECRET_NAME` was created. |
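As a sketch, assuming you've base64-encoded the PEM files, the secret could look like this; the secret name and namespace below are examples, not required values:

```yaml
# Hypothetical secret holding the etcd client mTLS material
apiVersion: v1
kind: Secret
metadata:
  name: newrelic-etcd-tls-secret   # referenced by ETCD_TLS_SECRET_NAME
  namespace: newrelic              # referenced by ETCD_TLS_SECRET_NAMESPACE
type: Opaque
data:
  cert: <base64-encoded client certificate>
  key: <base64-encoded client private key>
  cacert: <base64-encoded root CA certificate>
```

The corresponding environment variables in the integration manifest would then be:

```yaml
- name: "ETCD_TLS_SECRET_NAME"
  value: "newrelic-etcd-tls-secret"
- name: "ETCD_TLS_SECRET_NAMESPACE"
  value: "newrelic"
```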
API server
By default, the API server metrics are queried using the `localhost:8080` unsecured endpoint. If this port is disabled, you can also query these metrics over the secure port. To enable this, set the following configuration option in the Kubernetes integration manifest file:
| Option | Value |
|---|---|
| `API_SERVER_ENDPOINT_URL` | The (secure) URL to query the metrics. The API server uses `localhost:443` by default.<br/>Ensure that the `ClusterRole` has been updated to the newest version found in the manifest.<br/>Added in version 1.15.0 |
Important
Note that the port can differ depending on the secure port used by the API server.
For example, in Minikube the API server secure port is 8443, and therefore `API_SERVER_ENDPOINT_URL` should be set to `https://localhost:8443`.
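For instance, a manifest snippet for the Minikube case above would set the variable like this:

```yaml
- name: "API_SERVER_ENDPOINT_URL"
  value: "https://localhost:8443"
```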
OpenShift configuration
Control plane components on OpenShift 4.x use endpoint URLs that require SSL and service-account-based authentication. Therefore, you can't use the default endpoint URLs.
Important
When installing on OpenShift through Helm, specify the configuration to automatically include these endpoints. Setting `openshift.enabled=true` and `openshift.version="4.x"` will include the secure endpoints and enable the `/var/run/crio.sock` runtime, as shown in the sketch below.
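As a sketch, assuming you install through the `nri-bundle` chart (the exact nesting of these keys can vary between chart versions):

```yaml
# Hypothetical nri-bundle values.yaml excerpt enabling the OpenShift defaults
newrelic-infrastructure:
  openshift:
    enabled: true     # include the secure control plane endpoints
    version: "4.x"    # also enables the /var/run/crio.sock runtime
```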
To configure control plane monitoring on OpenShift, uncomment the following environment variables in the customized manifest. URL values are pre-configured to the default base URLs for the control plane monitoring metrics endpoints in OpenShift 4.x.
- name: "SCHEDULER_ENDPOINT_URL" value: "https://localhost:10259 - name: "ETCD_ENDPOINT_URL" value: "https://localhost:9979" - name: "CONTROLLER_MANAGER_ENDPOINT_URL" value: "https://localhost:10257" - name: "API_SERVER_ENDPOINT_URL" value: "https://localhost:6443"
Important
Even though the custom `ETCD_ENDPOINT_URL` is defined, etcd requires HTTPS and mTLS authentication to be configured. For more on configuring mTLS for etcd in OpenShift, see Set up mTLS for etcd in OpenShift.
Kubernetes logs
If you want to generate verbose logs and get version and configuration information, check out the information below.
Monitor services running on Kubernetes
Monitoring services in Kubernetes works by leveraging our infrastructure agent and on-host integrations, together with an autodiscovery mechanism that points them to pods with a specified selector.
Check the Enable monitoring of services using the Helm Chart doc to learn how to do it. Check out this example for version 2, which shows the `yaml` config for the Redis integration added to the `values.yml` file of the `nri-bundle` chart.
```yaml
newrelic-infrastructure:
  integrations_config:
    - name: nri-redis.yaml
      data:
        discovery:
          command:
            # Run NRI Discovery for Kubernetes
            # https://github.com/newrelic/nri-discovery-kubernetes
            exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250
            match:
              label.app: redis
        integrations:
          - name: nri-redis
            env:
              # using the discovered IP as the hostname address
              HOSTNAME: ${discovery.ip}
              PORT: 6379
            labels:
              env: test
```
Add a service YAML to the Kubernetes integration config
If you're using Kubernetes integration version 2, you need to add an entry for this ConfigMap in the `volumes` and `volumeMounts` sections of the DaemonSet's `spec`, to ensure all the files in the ConfigMap are mounted in `/etc/newrelic-infra/integrations.d/`.
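A minimal sketch of those entries, assuming a hypothetical ConfigMap named `nri-integration-cfg` and the default agent container name `newrelic-infra`:

```yaml
spec:
  template:
    spec:
      containers:
        - name: newrelic-infra
          volumeMounts:
            # Mounts every key in the ConfigMap as a file under integrations.d
            - name: nri-integrations-d
              mountPath: /etc/newrelic-infra/integrations.d/
      volumes:
        - name: nri-integrations-d
          configMap:
            name: nri-integration-cfg   # hypothetical ConfigMap name
```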