Unless otherwise noted, configuration options for your Prometheus OpenMetrics integration with New Relic apply to both Docker and Kubernetes environments. At a minimum, the following configuration values are required:
Recommendation: Configure your New Relic license key as an environment variable named
LICENSE_KEY. This provides a more secure setup, as New Relic can load the environment variable from a mutual TLS authentication secret.
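For example, in Kubernetes the environment variable can be populated from a secret. This is a sketch only; the secret name `nri-license-key` and key `license` are illustrative and must match a secret you create:

```yaml
# Container spec fragment: load LICENSE_KEY from a Kubernetes secret.
# The secret name and key below are hypothetical examples.
env:
  - name: LICENSE_KEY
    valueFrom:
      secretKeyRef:
        name: nri-license-key
        key: license
```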
The nri-prometheus-latest.yaml manifest file includes the
nri-prometheus-cfg config map, which shows an example configuration. Use the manifest file to configure the following parameters.
- Example configuration file
The following is an example configuration file that you can save and modify to fit your needs. For more information, see the documentation about mutual TLS authentication and translating PromQL to NRQL.
```yaml
# The name of your cluster. It's important to match other New Relic products to relate the data.
cluster_name: "<YOUR_CLUSTER_NAME>"
# When standalone is set to false, nri-prometheus requires an infrastructure agent to work and send data. Defaults to true.
# standalone: true
# How often the integration should run. Defaults to 30s.
# scrape_duration: "30s"
# The HTTP client timeout when fetching data from endpoints. Defaults to 5s.
# scrape_timeout: "5s"
# How old must the entries used for calculating the counters delta be
# before the telemetry emitter expires them. Defaults to 5m.
# telemetry_emitter_delta_expiration_age: "5m"
# How often must the telemetry emitter check for expired delta entries.
# Defaults to 5m.
# telemetry_emitter_delta_expiration_check_interval: "5m"
# Whether the integration should run in verbose mode or not. Defaults to false.
verbose: false
# Whether the integration should run in audit mode or not. Defaults to false.
# Audit mode logs the uncompressed data sent to New Relic. Use this to log all data sent.
# It does not include verbose mode. This can lead to a high log volume, use with care.
audit: false
# Whether the integration should skip TLS verification or not. Defaults to false.
insecure_skip_verify: false
# The label used to identify scrapable targets. Defaults to "prometheus.io/scrape".
scrape_enabled_label: "prometheus.io/scrape"
# Whether k8s nodes need to be labelled to be scraped or not. Defaults to true.
require_scrape_enabled_label_for_nodes: true
# Number of worker threads used for scraping targets.
# For large clusters with many (>400) endpoints, slowly increase until the scrape
# time falls below the desired `scrape_duration`.
# Increasing this value too much will result in huge memory consumption if too
# many metrics are being scraped.
# Default: 4
# worker_threads: 4
# Maximum number of metrics to keep in memory until a report is triggered.
# Changing this value is not recommended unless instructed by the New Relic support team.
# max_stored_metrics: 10000
# Minimum amount of time to wait between reports. Cannot be lower than the default, 200ms.
# Changing this value is not recommended unless instructed by the New Relic support team.
# min_emitter_harvest_period: 200ms
# targets:
#   - description: Secure etcd example
#     urls: ["https://192.168.3.1:2379", "https://192.168.3.2:2379", "https://192.168.3.3:2379"]
#     tls_config:
#       ca_file_path: "/etc/etcd/etcd-client-ca.crt"
#       cert_file_path: "/etc/etcd/etcd-client.crt"
#       key_file_path: "/etc/etcd/etcd-client.key"
# Proxy to be used by the emitters when submitting metrics. It should be
# in the format [scheme]://[domain]:[port].
# The emitter is the component in charge of sending the scraped metrics.
# This proxy won't be used when scraping metrics from the targets.
# By default it's empty, meaning that no proxy will be used.
# emitter_proxy: "http://localhost:8888"
# Certificate to add to the root CA that the emitter will use when
# verifying server certificates.
# If left empty, TLS uses the host's root CA set.
# emitter_ca_file: "/path/to/cert/server.pem"
# Set to true in order to stop autodiscovery in the k8s cluster. It can be useful when running the Pod with a service account
# having limited privileges. Defaults to false.
# disable_autodiscovery: false
# Whether the emitter should skip TLS verification when submitting data.
# Defaults to false.
# emitter_insecure_skip_verify: false
# Histogram support is based on New Relic's guidelines for higher
# level metrics abstractions https://github.com/newrelic/newrelic-exporter-specs/blob/master/Guidelines.md.
# To better support visualization of this data, percentiles are calculated
# based on the histogram metrics and sent to New Relic.
# By default, the following percentiles are calculated: 50, 95 and 99.
#
# percentiles:
#   - 50
#   - 95
#   - 99
# transformations:
#   - description: "General processing rules"
#     rename_attributes:
#       - metric_prefix: ""
#         attributes:
#           container_name: "containerName"
#           pod_name: "podName"
#           namespace: "namespaceName"
#           node: "nodeName"
#           container: "containerName"
#           pod: "podName"
#           deployment: "deploymentName"
#     ignore_metrics:
#       # Ignore all the metrics except the ones listed below.
#       # This is a list that complements the data retrieved by the New
#       # Relic Kubernetes Integration, that's why Pods and containers are
#       # not included, because they are already collected by the
#       # Kubernetes Integration.
#       - except:
#           - kube_hpa_
#           - kube_daemonset_
#           - kube_statefulset_
#           - kube_endpoint_
#           - kube_service_
#           - kube_limitrange
#           - kube_node_
#           - kube_poddisruptionbudget_
#           - kube_resourcequota
#           - nr_stats
#     copy_attributes:
#       # Copy all the labels from the timeseries with metric name
#       # `kube_hpa_labels` into every timeseries with a metric name that
#       # starts with `kube_hpa_` only if they share the same `namespace`
#       # and `hpa` labels.
#       - from_metric: "kube_hpa_labels"
#         to_metrics: "kube_hpa_"
#         match_by:
#           - namespace
#           - hpa
#       - from_metric: "kube_daemonset_labels"
#         to_metrics: "kube_daemonset_"
#         match_by:
#           - namespace
#           - daemonset
#       - from_metric: "kube_statefulset_labels"
#         to_metrics: "kube_statefulset_"
#         match_by:
#           - namespace
#           - statefulset
#       - from_metric: "kube_endpoint_labels"
#         to_metrics: "kube_endpoint_"
#         match_by:
#           - namespace
#           - endpoint
#       - from_metric: "kube_service_labels"
#         to_metrics: "kube_service_"
#         match_by:
#           - namespace
#           - service
#       - from_metric: "kube_node_labels"
#         to_metrics: "kube_node_"
#         match_by:
#           - namespace
#           - node
# Integration definition files required to map metrics to entities.
# definition_files_path: /etc/newrelic-infra/definition-files
```
- Key names and definitions
Here are some key names and definitions for your Prometheus OpenMetrics config file.
- `cluster_name`: The name of the cluster. This value is included as the `clusterName` attribute for all metrics.
- `verbose`: When `true`, logs debugging information. When `false` (default), only logs error messages.
- `targets`: Configuration of static endpoints to be scraped by the integration. It contains a list of objects. For more information about this structure, see the documentation about target configuration.
- `scrape_enabled_label`: String. The integration checks whether a Kubernetes pod or service is annotated or labeled with this value to decide whether to scrape it. This is particularly useful when you want to limit the amount of data sent to New Relic by ignoring metrics or including only specific ones. Since by default the integration uses the same label Prometheus uses to discover scrapable targets, most exporters you install set this label automatically. To keep fine-grained control over the targets you want the integration to scrape, set this option to some other value (such as `newrelic/scrape`) and then add the annotation or label `newrelic/scrape: "true"` to your Kubernetes objects. If both are set, annotations take precedence over labels.
- `scrape_duration`: How often the scraper should run. To lower memory usage, increase this value; decreasing it raises memory usage. The impact on memory usage comes from distributing target fetching over the scrape interval to avoid querying (and buffering) all the data at once. Default: `30s`.
- `scrape_timeout`: The HTTP client timeout when fetching data from endpoints. Default: `5s`.
- `worker_threads`: Number of worker threads used for scraping targets. Can be increased in environments with a high number of targets or targets with high latency, but may increase memory consumption. Default: `4`. Using more than `10` is not recommended.
- `require_scrape_enabled_label_for_nodes`: Whether Kubernetes nodes need to be labeled to be scraped. Default: `true`.
- `percentiles`: Histogram support is based on New Relic's guidelines for higher-level metrics abstractions. To better support visualization of this data, percentiles are calculated from the histogram metrics and sent to New Relic. By default, the percentiles `50`, `95`, and `99` are calculated.
- `emitter_proxy`: Proxy used by the integration when submitting metrics, in the format `[scheme]://[domain]:[port]`. This proxy is not used when fetching metrics from the targets. By default this is empty, and no proxy is used.
- `emitter_ca_file`: Certificate to add to the root CA that the emitter uses when verifying server certificates. If left empty, TLS uses the host's root CA set.
- `emitter_insecure_skip_verify`: Whether the emitter should skip TLS verification when submitting data. Default: `false`.
- `disable_autodiscovery`: Set to `true` to disable autodiscovery in the Kubernetes cluster. This can be useful when running the pod with a service account that has limited privileges. Default: `false`.
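For instance, to take fine-grained control of scraping with a custom `scrape_enabled_label`, the integration configuration and the Kubernetes objects might look like this sketch (the `newrelic/scrape` label name is the example value suggested above):

```yaml
# In the nri-prometheus configuration:
scrape_enabled_label: "newrelic/scrape"

# Metadata fragment of each Kubernetes pod or service you want scraped
# (an annotation with the same key takes precedence over this label):
metadata:
  labels:
    newrelic/scrape: "true"
```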
Configure objects in the targets key
If you want the `targets` key in the configuration file to contain one or more objects, use the following structure in the YAML list:
- `description`: A description for the URLs in this target.
- `urls`: A list of strings with the URLs to be scraped.
- `tls_config`: Authentication configuration used to send requests. It supports TLS and mutual TLS. For more information, see the documentation about mutual TLS authentication.
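Putting these keys together, a `targets` entry follows the shape of the commented etcd example in the configuration file above:

```yaml
targets:
  - description: Secure etcd example
    urls: ["https://192.168.3.1:2379", "https://192.168.3.2:2379"]
    # tls_config is optional; use it for endpoints that require (mutual) TLS.
    tls_config:
      ca_file_path: "/etc/etcd/etcd-client-ca.crt"
      cert_file_path: "/etc/etcd/etcd-client.crt"
      key_file_path: "/etc/etcd/etcd-client.key"
```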
- Kubernetes port and endpoint path
New Relic's Prometheus OpenMetrics integration automatically discovers which targets to scrape. To specify the port and endpoint path to be used when constructing the target, use the
`prometheus.io/port` and `prometheus.io/path` annotations or labels in your Kubernetes pods and services. Annotations take precedence over labels.
If `prometheus.io/port` is not present, the integration will try to scrape each
`ContainerPort` defined for the service.
If `prometheus.io/path` is not present, the integration will default to `/metrics`.
If a service is not serving metrics on the default path but, for example, on
`/my-metrics-path`, add the label
`prometheus.io/path=my-metrics-path` to the pod. If the path to the metrics endpoint is more complex and cannot be a valid label value (for example,
`foo/bar`), use annotations instead.
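For example, a path such as `foo/bar` cannot be stored in a label value, so it is set through annotations instead. This is a sketch; the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "foo/bar"   # complex path: valid as annotation, not as label
spec:
  containers:
    - name: my-app
      image: my-app:latest          # hypothetical image
      ports:
        - containerPort: 8080
```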
- Example: Labels for Kubernetes port and path
In this example, you have a deployment in your cluster whose pods expose Prometheus metrics on port
`8080` at the path `/my-metrics`. In the
`PodSpec` metadata of the deployment manifest, set the labels `prometheus.io/scrape: "true"`, `prometheus.io/port: "8080"`, and
`prometheus.io/path: "my-metrics"`. When the integration tries to retrieve the metrics from your pods, it will send a request to `http://<pod_ip>:8080/my-metrics`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "my-metrics"
```
Reload the configuration
The Prometheus OpenMetrics integration does not automatically reload the configuration when you make changes to the configuration file.
Docker: To reload the configuration, restart the container running the integration:
```shell
docker restart nri-prometheus
```
Kubernetes: To reload the configuration, restart the integration. Recommendation: Scale the deployment down to zero replicas, and then scale it back to one replica:
```shell
kubectl scale deployment nri-prometheus --replicas=0
kubectl scale deployment nri-prometheus --replicas=1
```
Docker: Run the previous config file
Docker: To run the integration with the previous configuration file:
- Copy the content and save it to a file named config.yaml.
- From within the same directory, run this command:
```shell
docker run -d --restart unless-stopped \
  --name nri-prometheus \
  -e CLUSTER_NAME="YOUR_CLUSTER_NAME" \
  -e LICENSE_KEY="YOUR_LICENSE_KEY" \
  -v "$(pwd)/config.yaml:/config.yaml" \
  newrelic/nri-prometheus:latest --configfile=/config.yaml
```