You can collect metrics about your Confluent Cloud-managed Kafka deployment with the OpenTelemetry Collector. The collector is a component of OpenTelemetry that collects, processes, and exports telemetry data to New Relic (or any observability backend).
Complete the steps below to collect Kafka metrics from Confluent Cloud using an OpenTelemetry collector running in Docker.
Make sure you're set up
Before you start, you need the New Relic license key for the account you want to report data to. You should also verify that:
- You have a Docker daemon running
- You have Docker Compose installed
- You have a Confluent Cloud account
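You can confirm the first two prerequisites from a terminal. This is a convenience sketch, not part of the example repo:

```shell
# Check that the docker CLI and the Compose plugin are available.
check_prereqs() {
  if command -v docker >/dev/null 2>&1; then
    echo "docker: ok"
  else
    echo "docker: missing"
  fi
  if docker compose version >/dev/null 2>&1; then
    echo "compose: ok"
  else
    echo "compose: missing"
  fi
}
check_prereqs
```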
Download or clone the example repo
Download or clone New Relic's OpenTelemetry Examples repo, as this setup uses its example collector configuration. Once downloaded, open the Confluent Cloud example directory. For more information, check the README there as well.
Add the authentication files
This example setup uses TLS to authenticate the request to Confluent Cloud. There are multiple methods to authenticate, so you should follow your company's best practices and authentication methods.
TLS/SSL requires you to create keys and certificates, create your own Certificate Authority (CA), and sign the certificate.
Doing this should leave you with three files, which need to be added to this directory. This example references them as the following files:

- `key.pem`
- `cert.pem`
- `ca.pem`

Important
For more information about TLS authentication with Confluent Cloud, check the documentation on authenticating with TLS as well as the security tutorial.
For dev/test Confluent environments, you can simplify this by using plain text authentication.
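If you do use TLS, the key and certificate creation described above can be sketched with `openssl`. This is a minimal illustration; the subject names, key sizes, and validity periods are placeholders, so adjust them to your company's policy:

```shell
# 1. Create your own Certificate Authority (CA key + self-signed CA cert).
openssl req -new -x509 -newkey rsa:2048 -nodes \
  -keyout ca-key.pem -out ca.pem -days 365 \
  -subj "/CN=example-ca"

# 2. Create a client key and a certificate signing request (CSR).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout key.pem -out client.csr \
  -subj "/CN=otel-collector-client"

# 3. Sign the client CSR with your CA to produce the client certificate.
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -days 365
```

This leaves `key.pem`, `cert.pem`, and `ca.pem` in the current directory, ready to copy into the example directory.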
Set environment variables and run the collector
Export the following variables or add them to a `.env` file, then run the `docker compose up` command.
```shell
# Open the Confluent Cloud example directory
cd newrelic-opentelemetry-examples/other-examples/collector/confluentcloud

# Set environment variables
export NEW_RELIC_API_KEY=<YOUR_API_KEY>
export NEW_RELIC_OTLP_ENDPOINT=https://otlp.nr-data.net
export CLUSTER_ID=<YOUR_CLUSTER_ID>
export CLUSTER_API_KEY=<YOUR_CLUSTER_API_KEY>
export CLUSTER_API_SECRET=<YOUR_CLUSTER_API_SECRET>
export CLUSTER_BOOTSTRAP_SERVER=<YOUR_CLUSTER_BOOTSTRAP_SERVER>

# Run the collector in Docker
docker compose up
```
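If you prefer a `.env` file, the same variables can live next to the Compose file, since Docker Compose reads `.env` automatically. A sketch with placeholder values:

```
# .env - replace the placeholders with your own values
NEW_RELIC_API_KEY=<YOUR_API_KEY>
NEW_RELIC_OTLP_ENDPOINT=https://otlp.nr-data.net
CLUSTER_ID=<YOUR_CLUSTER_ID>
CLUSTER_API_KEY=<YOUR_CLUSTER_API_KEY>
CLUSTER_API_SECRET=<YOUR_CLUSTER_API_SECRET>
CLUSTER_BOOTSTRAP_SERVER=<YOUR_CLUSTER_BOOTSTRAP_SERVER>
```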
Local variable information
Variable | Description | Docs |
---|---|---|
`NEW_RELIC_API_KEY` | New Relic ingest API key | |
`NEW_RELIC_OTLP_ENDPOINT` | The New Relic OTLP endpoint is https://otlp.nr-data.net:4318 | |
`CLUSTER_ID` | ID of the cluster from Confluent Cloud | Available in your Confluent cluster settings |
`CLUSTER_API_KEY` | Confluent Cloud API key | |
`CLUSTER_API_SECRET` | Confluent Cloud API secret | |
`CLUSTER_BOOTSTRAP_SERVER` | Bootstrap server for the cluster | Available in your cluster settings |
View your data in New Relic
You can view your Confluent Cloud data in a few different ways.
- Navigate to the New Relic marketplace and search for `Confluent`. The available dashboards can be installed right onto your account!
- Navigate to the metrics explorer and filter for `confluent_kafka`. This data can be added to any custom alert or dashboard.
Confluent Cloud metrics
This integration covers all of the exportable metrics in the Confluent Cloud Metrics API. A partial list of those metrics is below:
Name | Description |
---|---|
confluent_kafka_server_received_bytes | The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds. |
confluent_kafka_server_sent_bytes | The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_received_records | The delta count of records received. Each sample is the number of records received since the previous data sample. The count is sampled every 60 seconds. |
confluent_kafka_server_sent_records | The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_retained_bytes | The current count of bytes retained by the cluster. The count is sampled every 60 seconds. |
confluent_kafka_server_active_connection_count | The count of active authenticated connections. |
confluent_kafka_server_request_count | The delta count of requests received over the network. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_partition_count | The number of partitions. |
confluent_kafka_server_successful_authentication_count | The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_consumer_lag_offsets | The lag between a group member's committed offset and the partition's high watermark. |
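To sanity-check that these metrics are exportable for your cluster before (or alongside) the collector, you can query Confluent Cloud's Metrics API export endpoint directly. The endpoint path below is an assumption; confirm it against Confluent's Metrics API documentation. The response is in Prometheus text format:

```shell
# Build the Metrics API v2 export URL for a given Kafka cluster ID.
metrics_export_url() {
  # $1: Kafka cluster ID (e.g. lkc-...)
  echo "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?resource.kafka.id=$1"
}

# Example usage (requires the CLUSTER_* variables from the setup step):
#   curl -s -u "$CLUSTER_API_KEY:$CLUSTER_API_SECRET" "$(metrics_export_url "$CLUSTER_ID")"
```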