You can collect metrics about your Confluent Cloud-managed Kafka deployment with the OpenTelemetry Collector. The collector is a component of OpenTelemetry that collects, processes, and exports telemetry data to New Relic (or any observability backend).
Complete the steps below to collect Kafka metrics from Confluent Cloud using an OpenTelemetry Collector running in Docker.
Before you start, you need to have the license key for the account you want to report data to. You should also verify that:
Download New Relic's OpenTelemetry Examples repo, as this setup uses its example collector configuration. Once downloaded, open the Confluent Cloud example directory. For more information, you can check the README there as well.
This example setup uses TLS to authenticate requests to Confluent Cloud. There are multiple authentication methods, so follow your company's best practices and authentication methods.
TLS/SSL requires you to create keys and certificates, create your own Certificate Authority (CA), and sign the certificate. Doing this should leave you with three files, which need to be added to this directory. Those files are referenced in this example as the following files:
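The key and certificate creation can be sketched with the `openssl` CLI. The file names used here (`ca.crt`, `client.key`, `client.crt`) and the subject names are placeholders, not the names mandated by the example; match them to whatever your collector configuration references:

```shell
# Sketch of the TLS setup, assuming the openssl CLI is available.
# File names and CN values below are placeholders.

# 1. Create your own Certificate Authority (CA).
openssl req -new -x509 -nodes -days 365 \
  -subj "/CN=kafka-ca" \
  -keyout ca.key -out ca.crt

# 2. Create a client key and a certificate signing request (CSR).
openssl req -new -nodes \
  -subj "/CN=kafka-client" \
  -keyout client.key -out client.csr

# 3. Sign the client certificate with the CA.
openssl x509 -req -days 365 \
  -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.crt
```

Copy the resulting files into the Confluent Cloud example directory so the collector container can mount them.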
Export the following variables or add them to a `.env` file, then run the `docker compose up` command.
```shell
# Open the Confluent Cloud example directory
cd newrelic-opentelemetry-examples/other-examples/collector/confluentcloud

# Set environment variables
export NEW_RELIC_API_KEY=<YOUR_API_KEY>
export NEW_RELIC_OTLP_ENDPOINT=https://otlp.nr-data.net
export CLUSTER_ID=<YOUR_CLUSTER_ID>
export CLUSTER_API_KEY=<YOUR_CLUSTER_API_KEY>
export CLUSTER_API_SECRET=<YOUR_CLUSTER_API_SECRET>
export CLUSTER_BOOTSTRAP_SERVER=<YOUR_CLUSTER_BOOTSTRAP_SERVER>

# Run the collector in Docker
docker compose up
```
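If you prefer a `.env` file, the same variables can be placed next to the example's Docker Compose file so `docker compose` picks them up automatically. The values below are placeholders to replace with your own credentials:

```shell
# .env — placeholder values; replace with your own credentials
NEW_RELIC_API_KEY=<YOUR_API_KEY>
NEW_RELIC_OTLP_ENDPOINT=https://otlp.nr-data.net
CLUSTER_ID=<YOUR_CLUSTER_ID>
CLUSTER_API_KEY=<YOUR_CLUSTER_API_KEY>
CLUSTER_API_SECRET=<YOUR_CLUSTER_API_SECRET>
CLUSTER_BOOTSTRAP_SERVER=<YOUR_CLUSTER_BOOTSTRAP_SERVER>
```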
| Variable | Description |
| --- | --- |
| `NEW_RELIC_API_KEY` | New Relic ingest API key. |
| `NEW_RELIC_OTLP_ENDPOINT` | New Relic OTLP endpoint: https://otlp.nr-data.net:4318. |
| `CLUSTER_ID` | ID of the cluster from Confluent Cloud. Available in your Confluent cluster settings. |
| `CLUSTER_API_KEY` | Cloud API key. |
| `CLUSTER_API_SECRET` | Cloud API secret. |
| `CLUSTER_BOOTSTRAP_SERVER` | Bootstrap server for the cluster. Available in your cluster settings. |
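As a quick sanity check before starting the collector, you can build the Confluent Cloud Metrics API export URL from these variables and optionally fetch it directly. The endpoint shown is the Metrics API's Prometheus-format export endpoint; treat the exact URL as an assumption to confirm against Confluent's documentation:

```shell
# Hypothetical sanity check: build the Metrics API export URL from the
# CLUSTER_ID variable (assumed exported as shown above).
EXPORT_URL="https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?resource.kafka.id=${CLUSTER_ID}"
echo "$EXPORT_URL"

# Uncomment to fetch Prometheus-formatted metrics with your key and secret:
# curl -s -u "${CLUSTER_API_KEY}:${CLUSTER_API_SECRET}" "$EXPORT_URL"
```

A successful authenticated fetch returns Prometheus-style metric lines, which is the same data the collector scrapes on your behalf.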
You can view your Confluent Cloud data in a few different ways:
- Navigate to the New Relic marketplace and search for Confluent. The available dashboards can be installed directly onto your account.
- Navigate to the metrics explorer and filter for `confluent_kafka`. This data can be added to any custom alert or dashboard.
This integration covers all exportable metrics within the Confluent Cloud Metrics API. A partial list of the exportable metrics is below:
The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.
The delta count of records received. Each sample is the number of records received since the previous data sample. The count is sampled every 60 seconds.
The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds.
The current count of bytes retained by the cluster. The count is sampled every 60 seconds.
The count of active authenticated connections.
The delta count of requests received over the network. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds.
The number of partitions.
The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds.
The lag between a group member's committed offset and the partition's high watermark.