Prometheus OpenMetrics integration (Kubernetes)

Our Prometheus OpenMetrics integration for Kubernetes scrapes Prometheus endpoints and sends the data to New Relic.

With this integration, you can:

  • Automatically identify a list of endpoints.
  • Collect metrics that are important to your business.
  • Query and visualize this data in the New Relic UI.

Reduce overhead, scale your data

In a Kubernetes environment, New Relic automatically discovers endpoints in the same way that the Prometheus Kubernetes collector does: the integration looks for the prometheus.io/scrape label or annotation on pods and services. You can also specify additional static endpoints in the configuration.
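For example, a pod can opt in to scraping with the default annotation. This is a minimal sketch, assuming the default prometheus.io/scrape discovery annotation; the pod name, image, and port are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # hypothetical pod name
  annotations:
    prometheus.io/scrape: "true"    # annotation the integration looks for by default
spec:
  containers:
    - name: my-app
      image: my-app:latest          # hypothetical image
      ports:
        - containerPort: 8080       # port where the pod exposes its metrics
```

With this annotation in place, the integration discovers the pod and scrapes its metrics endpoint without any per-endpoint configuration.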

[Image: Example workflow using New Relic's Prometheus OpenMetrics integration for Kubernetes.]

The Prometheus OpenMetrics integration gathers all your data in one place, and New Relic stores the metrics from Prometheus. This integration helps remove the overhead of managing storage and availability of the Prometheus server.

Whether you're getting started with Prometheus or already monitor your environment with Prometheus and a separate dashboard tool, New Relic can help. To learn more about how to scale your data without the hassle of managing Prometheus and a separate dashboard tool, see New Relic's Prometheus integration blog post.


New Relic has contributed the Prometheus integration to the open source community under an Apache 2.0 license. This integration supports Prometheus protocol version 2 and Kubernetes versions 1.9 or higher. The integration was tested using Kubernetes 1.9, 1.11, and 1.13 on kops, GKE, and minikube.

The following limits apply:

  • 50 attributes per metric.
  • 50k unique time series per day. (A time series is a single, unique combination of a metric name and any tags or attributes.)
  • 100k data points per minute. (If you need a higher limit, contact your New Relic account representative.)

The Prometheus OpenMetrics integration scrapes up to 50 endpoints. If you hit this limit in your cluster, you can set SCRAPE_ENABLED_LABEL to a custom label and apply that label only to the pods you want to scrape.
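A minimal sketch of this approach, assuming the scraper reads SCRAPE_ENABLED_LABEL from its container environment; the custom label name here is hypothetical:

```yaml
# In the scraper's container spec: point discovery at a custom label
env:
  - name: SCRAPE_ENABLED_LABEL
    value: "newrelic/scrape"     # hypothetical custom label name
---
# On each pod you want scraped, apply the matching label:
metadata:
  labels:
    newrelic/scrape: "true"
```

Pods without the custom label are then ignored, which keeps the scraped endpoint count under the limit.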

Recommendation: Always run the scraper with one replica. Adding more replicas will result in duplicated data.
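Since extra replicas produce duplicated data, the scraper's Deployment should pin replicas to 1. A sketch, with a hypothetical deployment name and an illustrative image reference:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nri-prometheus                # hypothetical name for the scraper deployment
spec:
  replicas: 1                         # a single scraper; more replicas duplicate data
  selector:
    matchLabels:
      app: nri-prometheus
  template:
    metadata:
      labels:
        app: nri-prometheus
    spec:
      containers:
        - name: nri-prometheus
          image: newrelic/nri-prometheus:latest   # illustrative image reference
```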

What's next

Ready to get started? Here are some suggested next steps:

  • If you have problems with your integration, follow the troubleshooting procedures.

For more help

Recommendations for learning more: