Do you want to monitor hardware and kernel metrics for a Linux server? You can do this with the New Relic remote write integration and the Prometheus Node Exporter. When you combine these two programs with the Prometheus monitoring system, you can send data to New Relic where you can use it for troubleshooting.
The instructions here are based on the Prometheus guide Monitoring Linux host metrics with the node exporter. We'll repeat some of that information and expand on it with steps to help you send your data to New Relic.
Prerequisites
Here's what you need to get started:
- Decide which Linux host you want to instrument. We'll show examples below for Linux servers in EC2, GCP, and Azure instances.
- Make sure you've installed the Prometheus monitoring system. If you haven't already, you can download it from the Prometheus site.
Download and start Node Exporter
Complete the following:
1. Download and start Node Exporter with the commands below. Be sure to replace the `wget` URL with the latest from the Prometheus downloads page:

   ```bash
   # Note that <VERSION>, <OS>, and <ARCH> are placeholders.
   wget https://github.com/prometheus/node_exporter/releases/download/v<VERSION>/node_exporter-<VERSION>.<OS>-<ARCH>.tar.gz
   tar xvfz node_exporter-*.*-amd64.tar.gz
   cd node_exporter-*.*-amd64
   ./node_exporter
   ```

2. Set Node Exporter to run in the background with the keyboard commands `CONTROL + z` and `bg`. In production environments, you'd want to set this up as a service (for example, with `systemd`).
Configurations
Before you start Prometheus, you'll need to make some changes in your main `prometheus.yml` configuration file. We'll start with this basic `prometheus.yml` example below and add configurations to it in the remaining sections. You can copy these examples and paste them into your configuration file.

Note that `job_name` is set to `node`:
```yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: node
```
Connect Prometheus to New Relic
In your `prometheus.yml`, insert the `remote_write` snippet from the example below. Keep the following in mind:

- This is a snippet for Prometheus v2.26 and higher. If you're using an older version, see our main remote write instructions.
- Make sure to replace `YOUR_LICENSE_KEY` with your value.
- You can insert this at the bottom of the configuration file at the same indentation level as the `global` section.
```yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: node

remote_write:
  - url: https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=NodeExporter
    authorization:
      credentials: YOUR_LICENSE_KEY
```
Set up targets
You can configure targets statically via the `static_configs` parameter, or you can use dynamic discovery with one of the supported service-discovery mechanisms.
Static targets
You can set up a static configuration under a new `# Target setup` comment:
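For example, here's a minimal sketch that assumes Node Exporter is running on the same host as Prometheus and listening on its default port, 9100; adjust the target to match your environment. It goes under the `node` job defined above:

```yaml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: node

    # Target setup
    static_configs:
      - targets: ['localhost:9100']
```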
Dynamic targets
Instead of configuring static targets, you can configure service discovery.
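As a sketch, here's what dynamic discovery could look like for an EC2 host using Prometheus's `ec2_sd_configs` mechanism. The region, credentials, and port values are placeholders you'd replace with your own; other clouds have analogous mechanisms (for example, `azure_sd_configs` and `gce_sd_configs`):

```yaml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: node

    # Target setup
    ec2_sd_configs:
      - region: <YOUR_AWS_REGION>
        access_key: <YOUR_AWS_ACCESS_KEY>
        secret_key: <YOUR_AWS_SECRET_KEY>
        # Node Exporter's default port.
        port: 9100
```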
Set up the host to APM relationship
If you're monitoring an app with an APM agent on this Linux server, you'll need to make some additional configurations to enable relationship features in New Relic. These features rely on the relationship between the host and the app.
Relationships require attributes that are dropped by default in Prometheus. To get around this, you can include them through the `relabel_configs` stanza in the config file.
Tip
You can see all the available meta attributes under the appropriate `sd_config` in the Prometheus configuration page.
In the example below, we show the combination of dynamic discovery with labels. If you're using static targets, just insert the static target setup shown above instead.
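Here's a sketch that combines EC2 service discovery with a `relabel_configs` stanza. The `__meta_ec2_*` source labels come from Prometheus's EC2 service discovery; the target label names shown here (such as `instance_id` and `hostname`) are illustrative, so match them to the attributes your New Relic setup expects:

```yaml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: node

    # Target setup
    ec2_sd_configs:
      - region: <YOUR_AWS_REGION>
        access_key: <YOUR_AWS_ACCESS_KEY>
        secret_key: <YOUR_AWS_SECRET_KEY>
        port: 9100

    # Keep the discovery metadata that relationships rely on
    # by copying it onto the scraped time series as labels.
    relabel_configs:
      - source_labels: [__meta_ec2_instance_id]
        target_label: instance_id
      - source_labels: [__meta_ec2_public_dns_name]
        target_label: full_hostname
      - source_labels: [__meta_ec2_private_dns_name]
        target_label: hostname
```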
Start Prometheus
Now you can start the Prometheus scraper.
1. Execute the following:

   ```bash
   ./prometheus --config.file=./prometheus.yml
   ```

2. Set the scraper to run in the background with the keyboard commands `CONTROL + z` and `bg`. In production environments, you'd want to set this up as a service (for example, with `systemd`).

3. See your data in the New Relic UI by going to one.newrelic.com > Infrastructure > Hosts.