Fixed
For Kubernetes versions higher than 1.21: setting the apiServer flag service-account-extend-token-expiration to false caused the kubelet scraper pod to be restarted each time the token expired (every hour). On EKS, due to its implementation, this caused a pod restart every 90 days.
Changed
Updated several dependencies
Full Changelog: https://github.com/newrelic/nri-kubernetes/compare/v3.4.0...v3.4.1
Added
- Support for new k8s versions 1.23 & 1.24
- Added new metrics for k8s v1.23 & v1.24:
  - apiserverCurrentInflightRequestsMutating
  - apiserverCurrentInflightRequestsReadOnly
  - containerOOMEventsDelta
  - nodeCollectorEvictionsDelta
  - schedulerPendingPodsActive
  - schedulerPendingPodsBackoff
  - schedulerPendingPodsUnschedulable
Full Changelog: https://github.com/newrelic/nri-kubernetes/compare/v3.3.1...v3.4.0
Added
The nrFiltered attribute has been added to K8sNamespaceSamples when using namespace filtering
Full Changelog: https://github.com/newrelic/nri-kubernetes/compare/v3.3.0...v3.3.1
Added
Now it is possible to scrape data only from selected namespaces. For further information refer to filtering namespaces
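As a sketch, namespace filtering could be configured in the Helm chart's values file along these lines (the key names, such as namespaceSelector, matchLabels, and matchExpressions, are assumptions; check the chart's documentation for the authoritative schema):

```yaml
# Hypothetical sketch: restrict scraping to selected namespaces.
# The key path (common.config.namespaceSelector) and the label/expression
# fields are assumptions; verify against the nri-kubernetes chart docs.
common:
  config:
    namespaceSelector:
      matchLabels:
        newrelic.com/scrape: "true"
      matchExpressions:
        - key: environment
          operator: NotIn
          values: ["development"]
```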
Full Changelog: https://github.com/newrelic/nri-kubernetes/compare/v3.2.1...v3.3.0
Fixed
- Fixed rounding issue for allocatable cores https://github.com/newrelic/nri-kubernetes/issues/75
Changed
- Updated base alpine image from 3.15.4 to 3.16.0
Full Changelog: https://github.com/newrelic/nri-kubernetes/compare/v3.2.0...v3.2.1
Added
- The RestartCount metric for pods is now also available as restartCountDelta
Fixed
- The isReady metric is now correctly reported as false (rather than NULL) for pending pods
Full Changelog: https://github.com/newrelic/nri-kubernetes/compare/v3.1.0...v3.1.1
Changed
- kubernetes.io/tls secrets are now supported when configuring mTLS authentication for the control plane
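For illustration, a kubernetes.io/tls Secret holding the client certificate and key could look like this (the name, namespace, and certificate contents are placeholders; whether a CA bundle is also required depends on your control plane configuration):

```yaml
# Example kubernetes.io/tls Secret for control plane mTLS.
# The name, namespace, and encoded values below are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: etcd-client-mtls
  namespace: newrelic
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded private key>
```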
Full Changelog: https://github.com/newrelic/nri-kubernetes/compare/v3.0.0...v3.1.0
This new version makes significant changes to the number of components that are deployed to the cluster, and introduces many new configuration options to tune the behavior to your environment. We encourage you to take a look at what's changed in full detail here.
Breaking changes
Tip
The number and format of the metrics reported by version 3 of the integration have not changed with respect to earlier versions.
- The format of the values.yml file has changed to accommodate the newly added configuration options. Please take a look at our migration guide to see how to change your configuration.
Changed
- Our solution is now deployed in three components:
  - A DaemonSet to monitor the Kubelet, deployed on all nodes of the cluster.
  - A second DaemonSet to monitor the control plane, deployed on master nodes only.
  - A Deployment to collect metrics from kube-state-metrics, deployed on the same node as the latter.
- We now offer better control over CPU and memory limits and requests, which can now be configured for each of the three components individually.
- The impact of discovery and collection operations on the API server has been greatly reduced, thanks to the use of Kubernetes informers.
- Log messages have been greatly revamped to surface problems more clearly.
Added
- Comprehensive configuration options have been added to provide fine-grained control over how the integration discovers and connects to metric providers. Notably:
- Discovery options for control plane components have been improved. You can check the details on how discovery is configured here.
- It is now possible to collect metrics from control plane components running outside of the cluster.
- Discovery options for KSM and the kubelet have also been added.
- The interval at which metrics are collected is now configurable.
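For example, the scrape interval might be tuned in the chart's values file along these lines (the key path common.config.interval is an assumption based on the chart's layout; shorter intervals increase load on the metric providers):

```yaml
# Hypothetical sketch: set the metric collection interval.
# The key path (common.config.interval) is an assumption; check the
# nri-kubernetes chart documentation for the exact schema.
common:
  config:
    interval: 20s
```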
Effective Monday, 1 January 2022, our Kubernetes integration drops support for Kubernetes v1.15 and lower. The Kubernetes integration v2.8.0 and higher will only be compatible with Kubernetes versions 1.16 and higher. For more information, read this note or contact your account team.
Background
Enabling compatibility with the latest Kubernetes versions and adding new features to our Kubernetes offering prevents us from offering first-class support to versions prior to v1.16.
What is happening?
- The latest Kubernetes version v1.22 has API incompatibilities with versions prior to v1.16.
- Most major Kubernetes cloud providers have already deprecated v1.15 and lower.
What do you need to do?
It's easy: Upgrade your Kubernetes clusters to a supported version.
What happens if you don't make any changes to your account?
The Kubernetes integration may continue to work with end-of-life versions. However, we can't guarantee the quality of the solution, as new releases may introduce incompatibilities.
Note that support requests regarding these end-of-life versions won't be accepted.