On-host integrations release notes

Monday, January 13, 2020 - 15:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Added: Control Plane components can now also be discovered using the tier and component labels, in addition to k8s-app (see the label sketch after this list). You can read more about this in the control plane monitoring section of the docs.
  • Added: the newrelic-infra-ctl binary is now included as part of the image.

  • Changed: The integration now uses the Infrastructure agent v1.8.23. For more information, refer to the Infrastructure agent release notes between versions v1.5.75 and v1.8.23.
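The tier and component labels mentioned in the first item follow the convention used by kubeadm-style control plane pods; the values below are illustrative and not taken from these notes. A pod discoverable through them would carry metadata along these lines:

# Illustrative labels on a kubeadm-managed control plane pod (example values only)
metadata:
  labels:
    component: kube-scheduler
    tier: control-plane
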
Monday, January 13, 2020 - 15:30

Notes

Follow standard procedures to install the New Relic integration for Kubernetes events.

Changelog

  • Add custom attributes support. Custom attributes are added via environment variables of the form NRI_KUBE_EVENTS_<key>=<val>.

    Example: To add an attribute called environment with a value of staging to all events, add the following environment variable to the spec of the kube-events container:

    NRI_KUBE_EVENTS_environment=staging
    

    More detailed information can be found in the integration's documentation; a manifest sketch showing where the variable goes follows this list.

  • Add retry with exponential back-off when sending events to the forwarder agent.
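As a sketch of where the variable above goes (the layout assumes a standard kube-events Deployment manifest; adapt it to your own), the custom attribute would be declared in the kube-events container spec like this:

# Hedged example: declares the custom attribute described in the first item above
env:
- name: NRI_KUBE_EVENTS_environment
  value: "staging"
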
Wednesday, December 11, 2019 - 15:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Added: Control Plane Monitoring: from this release on, the Kubernetes integration automatically monitors your Control Plane, showing each component and its health status in the Kubernetes Cluster Explorer. Managed Kubernetes clusters (GKE, EKS, AKS, DO, etc.) are not supported due to technical restrictions.
  • Added: KSM can now be discovered using custom pod labels. By default, we look for pods with the labels k8s-app, app, or app.kubernetes.io/name set to the value kube-state-metrics.
    To use a custom label for discovery, put the label name in the KUBE_STATE_METRICS_POD_LABEL environment variable.

    If more than one pod carries the label, the integration always chooses the first one based on a sorted list of IP addresses.


    Example:

    # Label a specific KSM pod. Always set the value to the string "true".
    kubectl label pod kube-state-metrics newrelic-ksm=true


    Update the newrelic-infrastructure-k8s manifest to use this labeled KSM pod:

    env:
    - name: KUBE_STATE_METRICS_POD_LABEL
      value: newrelic-ksm

Tuesday, November 5, 2019 - 15:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Changed: The integration now uses the Infrastructure agent v1.5.75. For more information, please refer to the Infrastructure agent release notes between versions v1.5.31 and v1.5.75.

Friday, October 25, 2019 - 18:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Changed: The integration reverts to the Infrastructure agent v1.5.31, because agent v1.5.51 caused issues such as clusters not appearing in the New Relic One entity list UI.

Thursday, October 17, 2019 - 18:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Added: Node labels are now retrieved from the Kubernetes API and added to the K8sNodeSample. They can now be selected in the Narrow Down Entities section of the infrastructure alerts page to filter entities when using the K8sNodeSample, and can also be used in any NRQL statement that queries the K8sNodeSample. For example:

FROM K8sNodeSample SELECT average(cpuUsedCoreMilliseconds) WHERE `label.kubernetes.io/role` = 'master' 

By default, information retrieved from the Kubernetes API is cached for 5 minutes. The cache time can be changed with the API_SERVER_CACHE_TTL environment variable, as sketched below.
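As a hedged sketch (it assumes the variable accepts duration values such as 10m; check the integration documentation for the exact syntax), the TTL would be overridden in the integration's container spec like this:

env:
- name: API_SERVER_CACHE_TTL
  value: "10m"   # assumed duration syntax; the default cache time is 5 minutes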

Changed: The integration now uses the Infrastructure agent v1.5.51. For more information, please refer to the Infrastructure agent release notes between versions v1.5.31 and v1.5.51.

Wednesday, August 28, 2019 - 18:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Changed: The integration now uses the Infrastructure agent v1.5.31. The most notable changes are major improvements to logging and to the StorageSampler. For more information, please refer to the Infrastructure agent release notes between versions v1.3.18 and v1.5.31.
Wednesday, August 21, 2019 - 18:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Fixed: The Docker image has been rebuilt to fix a regression related to the issue moby/moby#35443.
Thursday, August 1, 2019 - 18:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Added: Support for discovering KSM when it runs with the label app.kubernetes.io/name. This should fix problems discovering KSM when deploying recent versions of its Helm chart.
Thursday, June 13, 2019 - 18:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Fixed: The Docker image has been rebuilt to fix this issue: moby/moby#35443.
Wednesday, June 5, 2019 - 18:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Fixed: The unprivileged version of the Kubernetes integration was running as root after a container restart, due to kubernetes/kubernetes#78308.
  • Fixed: Autodiscovery cache directory permissions were changed from 644 to 744 so that the nri-agent user can write inside it. This change was necessary to release an unprivileged version of the Kubernetes integration.
Wednesday, May 15, 2019 - 11:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Changed: The integration now uses the Infrastructure agent v1.3.18 instead of v1.1.14. For more information about all changes in this update, see the Infrastructure agent release notes.
Monday, April 15, 2019 - 11:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

  • Added: The integration reports the cluster name as Infrastructure inventory data.
  • Added: The integration reports a new event type, K8sClusterSample. At this moment, these events contain only the cluster name as an attribute (see the query sketch after this list).
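As an illustrative query only (it assumes the cluster name is exposed on K8sClusterSample as the clusterName attribute, the name used elsewhere in these notes), the new event type could be inspected with NRQL such as:

FROM K8sClusterSample SELECT uniques(clusterName) SINCE 1 hour ago
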
Tuesday, March 26, 2019 - 01:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Added: Support for kube-state-metrics (KSM) 1.5.

Added: The reason and message attributes are added to K8sPodSample. This provides visibility into why a pod's status is Failed.

For example, a pod that failed due to memory pressure on the node will report the following attributes:

  • Status: Failed
  • Reason: Evicted
  • Message: Pod The node was low on resource: [MemoryPressure]

It is possible to create an alert for any of these attributes.
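As a hedged example (it assumes the new attributes appear on K8sPodSample as lowercase status and reason, and it reuses podName, an attribute named elsewhere in these notes), a query that surfaces evicted pods could look like:

FROM K8sPodSample SELECT count(*) WHERE status = 'Failed' AND reason = 'Evicted' FACET podName SINCE 30 minutes ago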

Added: The memoryWorkingSetBytes attribute is added to K8sContainerSample. This metric is used by the OOM killer to decide when a container is using too much memory compared to its limit and should therefore be killed. It enables more precise monitoring of container memory usage.
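As an illustrative comparison only (memoryLimitBytes and containerName are assumed attribute names on K8sContainerSample, not confirmed by these notes), working set usage could be charted against the limit like this:

FROM K8sContainerSample SELECT average(memoryWorkingSetBytes), average(memoryLimitBytes) FACET containerName SINCE 1 hour ago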

Changed: Metrics are now always requested from kube-state-metrics in text format. In KSM v1.5, this is the default regardless of the format requested.

Tuesday, February 12, 2019 - 03:15

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Added: The namespaceName attribute was added to all the samples where the namespace attribute is present. This aligns with the standard naming for attributes such as clusterName and podName.

Deprecated: The namespace attribute is deprecated. We recommend using namespaceName instead.
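For illustration (K8sPodSample is reused from elsewhere in these notes; any sample carrying the attribute would work), a query adopting the new attribute could be:

FROM K8sPodSample SELECT count(*) FACET namespaceName SINCE 1 hour ago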

Tuesday, January 22, 2019 - 16:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Changed: Starting in version 1.5.0, the Kubernetes integration no longer reports the status of static pods, to avoid reporting an incorrect status.

The following GitHub issue tracks the progress of the bug fix in the Kubernetes source code: https://github.com/kubernetes/kubernetes/issues/61717.

Monday, January 7, 2019 - 08:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Changed: Starting in version 1.4.0, the Kubernetes integration is using Alpine Linux as the base image.

With Alpine the new image is smaller (16 MB instead of the previous 85 MB) and more secure. Alpine reduces the number of packages and libraries to the bare minimum, thus reducing the attack surface.

Wednesday, November 7, 2018 - 01:30

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Added: Pod Volume metrics. Monitor disk volume space and inodes for Volumes and Persistent Volumes associated with pods. See the detailed list of attributes in the documentation.

Added: Cluster name is now added to agent data as the clusterName attribute so that you can easily correlate Kubernetes integration data with agent data.
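As a hedged illustration of that correlation (SystemSample and cpuPercent are standard agent names assumed here, and my-cluster is a placeholder), agent data can be filtered by the new attribute like this:

FROM SystemSample SELECT average(cpuPercent) WHERE clusterName = 'my-cluster' SINCE 30 minutes ago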

Fixed: In version 1.3.0, the volume bytes and inodes usage percentage calculations were incorrect.

Tuesday, August 21, 2018 - 23:45

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Added: A 'reason' metric for containers in the 'Terminated' status. In previous versions, the 'reason' was already captured for the 'Waiting' status.
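As a sketch only (it assumes the metric is exposed on K8sContainerSample as lowercase status and reason attributes), terminated containers could be grouped by reason with:

FROM K8sContainerSample SELECT count(*) WHERE status = 'Terminated' FACET reason SINCE 1 hour ago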

Thursday, August 2, 2018 - 09:00

Notes

Follow standard procedures to install or update the New Relic integration for Kubernetes.

Changelog

Added: Support for specifying the Kubernetes API Host and Port by setting the 'KUBERNETES_SERVICE_HOST' and 'KUBERNETES_SERVICE_PORT' environment variables.
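As a hedged sketch (the host and port values are placeholders, and it assumes the variables are set on the integration's container spec), the override would look like:

env:
- name: KUBERNETES_SERVICE_HOST
  value: "10.0.0.1"    # placeholder API server host
- name: KUBERNETES_SERVICE_PORT
  value: "443"         # placeholder API server port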

Changed: Improved readability of log messages when verbose mode is enabled.

Fixed: Kubernetes API URL discovery failures should stop happening.
