You can collect metrics about your Confluent Cloud-managed Kafka deployment with the OpenTelemetry Collector. The collector is a component of OpenTelemetry that collects, processes, and exports telemetry data to New Relic (or any observability backend).
This integration works by running a Prometheus receiver configuration inside the OpenTelemetry Collector, which scrapes Confluent Cloud's Metrics API and exports that data to New Relic.
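As a minimal sketch of what that looks like, the collector configuration pairs a Prometheus receiver that scrapes Confluent Cloud's metrics export endpoint with an OTLP exporter pointed at New Relic. The endpoint path, scrape parameter, and environment variable names below are illustrative assumptions; the `collector.yaml` shipped in the example repo is the authoritative version.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: confluent-cloud
          scrape_interval: 60s            # Confluent samples most metrics every 60 seconds
          scheme: https
          static_configs:
            - targets: ["api.telemetry.confluent.cloud"]
          metrics_path: /v2/metrics/cloud/export
          params:
            "resource.kafka.id": ["${env:CLUSTER_ID}"]   # cluster to scrape (assumed variable name)
          basic_auth:
            username: "${env:CONFLUENT_API_ID}"          # Confluent Cloud API key (assumed variable name)
            password: "${env:CONFLUENT_API_SECRET}"      # Confluent Cloud API secret (assumed variable name)

exporters:
  otlphttp:
    endpoint: "${env:NEW_RELIC_OTLP_ENDPOINT}"           # e.g. https://otlp.nr-data.net:4318
    headers:
      api-key: "${env:NEW_RELIC_API_KEY}"                # New Relic ingest license key

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp]
```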
Complete the steps below to collect Kafka metrics from Confluent and export them to New Relic.
Make sure you're set up
Before you start, you need the license key for the New Relic account you want to report data to. You should also verify that:
- You have a docker daemon running
- You have Docker Compose installed
- You have a Confluent Cloud account
- You have your Confluent Cloud API key & secret available
Download or clone the example repo
Download New Relic's OpenTelemetry Examples repo, since this setup uses its example collector configuration. Once downloaded, open the Confluent Cloud example directory. For more information, check the README there as well.
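If you prefer the command line, you can clone the repository (it lives at https://github.com/newrelic/newrelic-opentelemetry-examples) and change into the example directory:

```bash
# Clone New Relic's OpenTelemetry examples and open the Confluent Cloud example
git clone https://github.com/newrelic/newrelic-opentelemetry-examples.git
cd newrelic-opentelemetry-examples/other-examples/collector/confluentcloud
```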
Set environment variables and run the collector
- Set the API key and secret variables for both Confluent Cloud and New Relic in the `.env` file (see the sketch after this list).
- Set the `CLUSTER_ID` variable to the ID of the target Kafka cluster.
- (Optional) To monitor connectors or Schema Registry instances managed by Confluent Cloud, un-comment the corresponding configuration in the `collector.yaml` file and set the matching ID in the `.env` file.
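As a rough sketch, a filled-in `.env` could look like the following. The variable names shown here are assumptions; keep whatever names the example's `.env` template actually defines.

```bash
# New Relic ingest license key and OTLP endpoint (US default shown)
NEW_RELIC_API_KEY=YOUR_NEW_RELIC_LICENSE_KEY
NEW_RELIC_OTLP_ENDPOINT=https://otlp.nr-data.net:4318

# Confluent Cloud credentials and the Kafka cluster to scrape
CONFLUENT_API_ID=YOUR_CONFLUENT_CLOUD_API_KEY
CONFLUENT_API_SECRET=YOUR_CONFLUENT_CLOUD_API_SECRET
CLUSTER_ID=lkc-xxxxxx

# Optional: un-comment the matching sections in collector.yaml before setting these
# CONNECTOR_ID=lcc-xxxxxx
# SCHEMA_REGISTRY_ID=lsrc-xxxxxx
```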
```bash
# Open the Confluent Cloud example directory
cd newrelic-opentelemetry-examples/other-examples/collector/confluentcloud

# Set environment variables.

# Run the collector in Docker
docker compose up
```
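Once the containers are up, a quick way to confirm the collector started cleanly is to list the services and follow the logs, watching for scrape or export errors:

```bash
# List the services started by the example's docker-compose file
docker compose ps

# Follow the collector logs; authentication or export failures will show up here
docker compose logs -f
```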
Local Variable information
Variable | Description |
---|---|
`NEW_RELIC_API_KEY` | New Relic Ingest API key |
`NEW_RELIC_OTLP_ENDPOINT` | Default US New Relic OTLP endpoint is https://otlp.nr-data.net:4318 |
`CLUSTER_ID` | ID of the cluster from Confluent Cloud |
`CONFLUENT_API_ID` | Confluent Cloud API key |
`CONFLUENT_API_SECRET` | Confluent Cloud API secret |
`CONNECTOR_ID` | (Optional) You can monitor your Confluent connectors by specifying the ID here |
`SCHEMA_REGISTRY_ID` | (Optional) You can monitor your Confluent Schema Registry by specifying the ID here |
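For context on how these variables reach the collector, the compose file loads the `.env` file and mounts the collector configuration into the container. The service name, image, and paths below are assumptions for illustration; the example's own `docker-compose.yaml` is the reference.

```yaml
services:
  otel-collector:
    # The contrib distribution includes the Prometheus receiver used by this example
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol-contrib/collector.yaml"]
    volumes:
      - ./collector.yaml:/etc/otelcol-contrib/collector.yaml
    env_file:
      - .env   # supplies the variables listed in the table above
```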
View your data in New Relic
You can view your Confluent Cloud data in a few different ways.
- Navigate to the New Relic marketplace and search for `Confluent`. The available dashboards can be installed right onto your account!
- Navigate to the metrics explorer and filter for `confluent_kafka`. This data can be added to any custom alert or dashboard.
Confluent Cloud metrics
This integration covers all of the exportable metrics in the Confluent Cloud Metrics API. A partial list of those metrics is below:
Name | Description |
---|---|
confluent_kafka_server_received_bytes | The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds. |
confluent_kafka_server_sent_bytes | The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_received_records | The delta count of records received. Each sample is the number of records received since the previous data sample. The count is sampled every 60 seconds. |
confluent_kafka_server_sent_records | The delta count of records sent. Each sample is the number of records sent since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_retained_bytes | The current count of bytes retained by the cluster. The count is sampled every 60 seconds. |
confluent_kafka_server_active_connection_count | The count of active authenticated connections. |
confluent_kafka_server_request_count | The delta count of requests received over the network. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_partition_count | The number of partitions. |
confluent_kafka_server_successful_authentication_count | The delta count of successful authentications. Each sample is the number of successful authentications since the previous data point. The count is sampled every 60 seconds. |
confluent_kafka_server_consumer_lag_offsets | The lag between a group member's committed offset and the partition's high watermark. |