
Kafka monitoring integration

The New Relic Kafka on-host integration reports metrics and configuration data from your Kafka service. We instrument all the key elements of your cluster, including brokers (discovered through either ZooKeeper or Bootstrap), producers, consumers, and topics.

To install the Kafka monitoring integration, you must run through the following steps:

  1. Prepare for the installation.
  2. Install and activate the integration.
  3. Configure the integration.
  4. Find and use data.
  5. Optionally, see Kafka's configuration settings.

Compatibility and requirements

Kafka versions

Our integration is compatible with Kafka version 3 or lower.

Please note the Apache Kafka EOL policy: you may experience unexpected results if you use an end-of-life Kafka version.

Supported operating systems

  • Windows
  • Linux

For a comprehensive list of specific Windows and Linux versions, check the table of compatible operating systems.

System requirements

  • A New Relic account. Don't have one? Sign up for free! No credit card required.
  • If Kafka is not running on Kubernetes or Amazon ECS, install the infrastructure agent on a Linux or Windows host, either the host where Kafka is installed or one capable of remotely accessing it. You also need:
  • Java version 8 or higher.
  • JMX enabled on all brokers.
  • Java-based consumers and producers only, and with JMX enabled.
  • Fewer than 10,000 monitored topics in total.

Connectivity requirements

The integration needs to be configured and allowed to connect to:

  • Hosts listed in zookeeper_hosts over the Zookeeper protocol, using the Zookeeper authentication mechanism, if autodiscover_strategy is set to zookeeper.
  • Hosts defined in bootstrap_broker_host over the Kafka protocol, using the Kafka broker's authentication/transport mechanisms, if autodiscover_strategy is set to bootstrap.
  • All brokers in the cluster over the Kafka protocol and port, using the Kafka brokers' authentication/transport mechanisms.
  • All brokers in the cluster over the JMX protocol and port, using the authentication/transport mechanisms specified in the JMX configuration of the brokers.
  • All producers/consumers specified in producers and consumers over the JMX protocol and port, if you want producer/consumer monitoring. JMX settings for the consumer must be the same as for the brokers.

Important

Security groups in AWS, and their equivalents in other cloud providers, don't have the required ports open by default. JMX requires two ports in order to work: the JMX port and the RMI port. These can be set to the same value when configuring the JVM to enable JMX, and both must be open for the integration to connect to and collect metrics from brokers.
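
As an illustration, JMX is commonly enabled on a broker by passing standard JVM flags, for example through the KAFKA_JMX_OPTS environment variable read by Kafka's startup scripts. This is a minimal sketch; the port 9999 and the hostname are placeholders to adapt to your environment and firewall rules:

```bash
# Sketch: enable remote JMX on a Kafka broker, pinning the JMX and RMI
# ports to the same value (9999 here) so only one port needs to be opened.
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.rmi.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=broker.example.com"
```

Production setups typically enable JMX authentication and SSL; the flags above disable them only to keep the example short.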

Prepare for the installation

Kafka is a complex piece of software that is built as a distributed system. For this reason, you need to ensure that the integration can contact all the required hosts and services so the data is collected correctly.

Install and activate the integration

To install the Kafka integration, follow the instructions for your environment:

Linux installation

  1. Follow the instructions for installing an integration, and replace the INTEGRATION_FILE_NAME variable with nri-kafka.

  2. Change the directory to the integrations configuration folder by running:

    ```bash
    cd /etc/newrelic-infra/integrations.d
    ```

  3. Copy the sample configuration file by running:

    ```bash
    sudo cp kafka-config.yml.sample kafka-config.yml
    ```

  4. Edit the kafka-config.yml configuration file with your favorite editor. For orientation, see the minimal sketch after this list and some configuration file examples.
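
Here's a minimal sketch of what kafka-config.yml could look like for a single broker discovered through the bootstrap strategy. The cluster name, host, and ports are placeholders; the full list of options is described in Kafka's configuration settings:

```yaml
integrations:
  - name: nri-kafka
    env:
      CLUSTER_NAME: my-cluster            # placeholder: any label identifying this cluster
      AUTODISCOVER_STRATEGY: bootstrap
      BOOTSTRAP_BROKER_HOST: localhost    # placeholder: a reachable broker
      BOOTSTRAP_BROKER_KAFKA_PORT: 9092
      BOOTSTRAP_BROKER_JMX_PORT: 9999     # must match the broker's JMX configuration
      METRICS: "true"
    interval: 30s
```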

Other environments

For Kubernetes and Amazon ECS environments, see the instructions in Configure the integration below.


Configure the integration

There are several ways to configure the integration, depending on how it was installed:

  • If enabled via Kubernetes, see Monitor services running on Kubernetes.
  • If enabled via Amazon ECS, see Monitor services running on ECS.
  • If installed on-host, edit the config in the integration's YAML configuration file, kafka-config.yml. An integration's YAML-format configuration is where you place required login credentials and configure how data is collected. Which options you change depends on your setup and preferences. The configuration file has common settings applicable to all integrations, such as interval, timeout, and inventory_source. To read all about these common settings, refer to our Configuration Format document.

Important

If you are still using our Legacy configuration and definition files, refer to this document for help.

As with other integrations, one kafka-config.yml configuration file can contain many instances of the integration, each collecting metrics from different brokers, consumers, and producers. You can see configuration examples with one or multiple instances in the kafka-config.yml sample files.

Specific settings related to Kafka are defined in the env section of each instance in the kafka-config.yml configuration file. These settings control the connection to your brokers, Zookeeper, and JMX, as well as other security settings and features. The list of valid settings is described in Kafka's configuration settings.
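
For example, an instance that discovers brokers through Zookeeper instead of a bootstrap broker might look like this sketch, where the hosts and ports are placeholders:

```yaml
integrations:
  - name: nri-kafka
    env:
      CLUSTER_NAME: my-cluster
      AUTODISCOVER_STRATEGY: zookeeper
      ZOOKEEPER_HOSTS: '[{"host": "localhost", "port": 2181}]'  # placeholder hosts
      DEFAULT_JMX_PORT: 9999   # JMX port assumed for every discovered broker
      METRICS: "true"
```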

The integration has two mutually exclusive modes of operation on each instance, which you select with the CONSUMER_OFFSET parameter: set it to false to collect broker, topic, producer, and consumer metrics, or set it to true to collect consumer offset data (KafkaOffsetSample).

Important

These modes are mutually exclusive because consumer offset collection takes a long time to run and has high performance requirements. To collect both groups of samples, set up two instances, one with each mode, as in the sketch below.
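
A sketch of that two-instance setup, with placeholder cluster name and hosts, and a deliberately broad consumer group pattern:

```yaml
integrations:
  # Instance 1: broker, topic, producer, and consumer metrics
  - name: nri-kafka
    env:
      CLUSTER_NAME: my-cluster
      AUTODISCOVER_STRATEGY: bootstrap
      BOOTSTRAP_BROKER_HOST: localhost
      BOOTSTRAP_BROKER_JMX_PORT: 9999
      CONSUMER_OFFSET: "false"
  # Instance 2: consumer offset collection only
  - name: nri-kafka
    env:
      CLUSTER_NAME: my-cluster
      AUTODISCOVER_STRATEGY: bootstrap
      BOOTSTRAP_BROKER_HOST: localhost
      CONSUMER_OFFSET: "true"
      CONSUMER_GROUP_REGEX: '.*'   # match all consumer groups; narrow this in production
```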

The values for these settings can be defined in several ways: directly in the configuration file, through environment variable replacement, or with secrets management.

Offset monitoring

When you set CONSUMER_OFFSET = true, by default only the metrics from consumer groups with active consumers (plus consumer metrics) are collected. To also collect the metrics from consumer groups with inactive consumers, set INACTIVE_CONSUMER_GROUP_OFFSET to true.

When a consumer group consumes more than one topic, it's valuable to have consumer group metrics separated by topic, especially if one of the topics has inactive consumers: you can then spot in which topic the consumer group is lagging, and whether there are active consumers for that consumer group and topic.

To get consumer group metrics separated by topic, set CONSUMER_GROUP_OFFSET_BY_TOPIC to true (it defaults to false).
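
Putting those flags together, the env section of an offset-collecting instance might include the following sketch (only the settings discussed above are shown):

```yaml
env:
  CONSUMER_OFFSET: "true"
  INACTIVE_CONSUMER_GROUP_OFFSET: "true"   # also collect groups with no active consumers
  CONSUMER_GROUP_OFFSET_BY_TOPIC: "true"   # break offset metrics down per topic
```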

For more on how to set up offset monitoring, see Configure KafkaOffsetSample collection.

kafka-config.yml sample files

Configuration options for the integration

For more about the available options, see Kafka's configuration settings.

Find and use data

Data from this service is reported to an integration dashboard.

Kafka data is attached to the following event types: KafkaBrokerSample, KafkaTopicSample, KafkaProducerSample, KafkaConsumerSample, KafkaPartitionSample, and KafkaOffsetSample.

You can query this data for troubleshooting purposes or to create charts and dashboards.

For more on how to find and use your data, see how to understand integration data.
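
For example, assuming broker samples are being collected, a NRQL query like this sketch charts incoming message throughput (broker.messagesInPerSecond follows the broker. prefix convention described below):

```sql
-- Average messages per second across brokers, as a time series
SELECT average(broker.messagesInPerSecond)
FROM KafkaBrokerSample
TIMESERIES SINCE 30 minutes ago
```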

Metrics collected by the integration

The Kafka integration collects the following metrics. Each metric name is prefixed with a category indicator and a period, such as broker. or consumer..
