
Collector for Confluent Cloud & Kafka monitoring

You can collect metrics about your Confluent Cloud-managed Kafka deployment with the OpenTelemetry Collector. The collector is a component of OpenTelemetry that collects, processes, and exports telemetry data to New Relic (or any observability backend).

This integration works by running a prometheus receiver configuration inside the OpenTelemetry collector, which scrapes Confluent Cloud's metrics API and exports that data to New Relic.

Complete the steps below to collect Kafka metrics from Confluent and export them to New Relic.

Make sure you're set up

Before you start, you need the license key for the New Relic account you want to report data to.

Download or clone the example repos

Download or clone New Relic's OpenTelemetry Examples repo; this setup uses its example collector configuration. Once downloaded, open the Confluent Cloud example directory. For more information, see the README there.

Set environment variables and run the collector

  • Set the API key and secret variables for both Confluent Cloud and New Relic in the .env file.
  • Set the CLUSTER_ID variable to the target Kafka cluster ID.
  • (Optional) To monitor connectors or schema registries managed by Confluent Cloud, uncomment the corresponding configuration in the collector.yaml file and set the matching ID in the .env file.
bash
$ # Open the Confluent Cloud example directory
$ cd newrelic-opentelemetry-examples/other-examples/collector/confluentcloud
$
$ # Set the environment variables in .env, then run the collector in Docker
$ docker compose up

Local variable information

  • NEW_RELIC_API_KEY: New Relic ingest API key (see the API key docs).
  • NEW_RELIC_OTLP_ENDPOINT: New Relic OTLP endpoint; the default US endpoint is https://otlp.nr-data.net:4318 (see the OTLP endpoint config docs).
  • CLUSTER_ID: ID of the cluster from Confluent Cloud (see the docs for the list-cluster-ID command).
  • CONFLUENT_API_KEY: Confluent Cloud API key (see the Cloud API key docs).
  • CONFLUENT_API_SECRET: Confluent Cloud API secret (see the Cloud API key docs).
  • CONNECTOR_ID: (optional) Monitor a Confluent connector by specifying its ID here (see the docs for the list-connector-ID command).
  • SCHEMA_REGISTRY_ID: (optional) Monitor your Confluent schema registry by specifying its ID here (see the docs for the list-schema-registry-ID command).

View your data in New Relic

You can view your Confluent Cloud data in a few different ways.

  • Navigate to the New Relic marketplace and search for Confluent. You can install the available dashboards directly onto your account.
  • Navigate to the metrics explorer and filter for confluent_kafka. This data can be added to any custom alert or dashboard.

Confluent Cloud metrics

This integration covers all of the exportable metrics in the Confluent Cloud Metrics API. A partial list of those metrics follows:

  • confluent_kafka_server_received_bytes: The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous sample. The count is sampled every 60 seconds.
  • confluent_kafka_server_sent_bytes: The delta count of bytes of the customer's data sent over the network. Each sample is the number of bytes sent since the previous sample. The count is sampled every 60 seconds.
  • confluent_kafka_server_received_records: The delta count of records received. Each sample is the number of records received since the previous sample. The count is sampled every 60 seconds.
  • confluent_kafka_server_sent_records: The delta count of records sent. Each sample is the number of records sent since the previous sample. The count is sampled every 60 seconds.
  • confluent_kafka_server_retained_bytes: The current count of bytes retained by the cluster. The count is sampled every 60 seconds.
  • confluent_kafka_server_active_connection_count: The count of active authenticated connections.
  • confluent_kafka_server_request_count: The delta count of requests received over the network. Each sample is the number of requests received since the previous sample. The count is sampled every 60 seconds.
  • confluent_kafka_server_partition_count: The number of partitions.
  • confluent_kafka_server_successful_authentication_count: The delta count of successful authentications. Each sample is the number of successful authentications since the previous sample. The count is sampled every 60 seconds.
  • confluent_kafka_server_consumer_lag_offsets: The lag between a group member's committed offset and the partition's high watermark.
