
Kafka's integration configuration settings

This integration is open source software. That means you can browse its source code and send improvements or create your own fork and build it.

Labels and custom attributes

Environment variables can be used to control config settings and are then passed through to the infrastructure agent. For instructions on how to use this feature, see Configure the infrastructure agent.

You can further decorate your metrics using labels. Labels let you add key/value pair attributes to your metrics, which you can then use to query, filter, or group your metrics.
Our default sample config file includes examples of labels but, as they are not mandatory, you can remove, modify, or add new ones of your choice.

labels:
  env: production
  role: kafka

For more about the general structure of on-host integration configuration, see the on-host integration configuration overview.

Inventory data

The Kafka integration captures the non-default broker and topic configuration parameters, and collects the topic partition schemes as reported by ZooKeeper. The data is available on the Inventory UI page under the config/kafka source.

Configure KafkaBrokerSample and KafkaTopicSample collection

The Kafka integration collects both metrics (M) and inventory (I) information. Check the Applies To column below to see which settings are available to each type of collection:

Setting

Description

Default

Applies To

CLUSTER_NAME

User-defined name to uniquely identify the cluster being monitored. Required.

N/A

M/I

KAFKA_VERSION

The version of the Kafka broker you're connecting to, used for setting optimum API versions. It must match, or be lower than, the broker's version.

Versions older than 1.0.0 may be missing some features.

Note that if the broker binary name is kafka_2.12-2.7.0, the Kafka API version to be used is 2.7.0 and the preceding 2.12 is the Scala language version.

1.0.0

M/I

AUTODISCOVER_STRATEGY

The method of discovering brokers. Options are zookeeper or bootstrap.

zookeeper

M/I

METRICS

Set to true to enable metrics-only collection.

false

INVENTORY

Set to true to enable inventory-only collection.

false
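Taken together, a minimal broker-monitoring configuration might look like the following sketch. The integrations:/env: layout follows the standard on-host integration config file format; the cluster name and version shown are placeholders:

```yml
integrations:
  - name: nri-kafka
    env:
      CLUSTER_NAME: my-kafka-cluster    # required, user-defined
      KAFKA_VERSION: "2.7.0"            # match or stay below your broker's version
      AUTODISCOVER_STRATEGY: zookeeper  # or bootstrap
      METRICS: "true"                   # metrics-only collection
```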

Zookeeper autodiscovery arguments

These options are only relevant when the autodiscover_strategy option is set to zookeeper.

Setting

Description

Default

Applies To

ZOOKEEPER_HOSTS

The list of Apache ZooKeeper hosts to connect to, in JSON format.

If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected.

[]

M/I

ZOOKEEPER_AUTH_SCHEME

The ZooKeeper authentication scheme that is used to connect. Currently, the only supported value is digest. If omitted, no authentication is used.

N/A

M/I

ZOOKEEPER_AUTH_SECRET

The ZooKeeper authentication secret that is used to connect. Should be of the form username:password. Only required if zookeeper_auth_scheme is specified.

N/A

M/I

ZOOKEEPER_PATH

The Zookeeper node under which the Kafka configuration resides. Defaults to /.

N/A

M/I

PREFERRED_LISTENER

Use a specific listener to connect to a broker. If unset, the first listener that passes a successful test connection is used. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL.

Note the SASL_* protocols only support Kerberos (GSSAPI) authentication.

N/A

M/I
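As an illustration, a ZooKeeper-based discovery block might be sketched as follows. Hosts, paths, and credentials are placeholders, and the host/port keys follow the JSON list format described for ZOOKEEPER_HOSTS:

```yml
env:
  AUTODISCOVER_STRATEGY: zookeeper
  ZOOKEEPER_HOSTS: '[{"host": "zk1.example.com", "port": 2181}, {"host": "zk2.example.com", "port": 2181}]'
  ZOOKEEPER_AUTH_SCHEME: digest           # only digest is supported
  ZOOKEEPER_AUTH_SECRET: "zkuser:zkpass"  # username:password form
  ZOOKEEPER_PATH: "/kafka"                # node holding the Kafka configuration
  PREFERRED_LISTENER: SSL
```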

Bootstrap broker discovery arguments

These options are only relevant when the autodiscover_strategy option is set to bootstrap.

Setting

Description

Default

Applies To

BOOTSTRAP_BROKER_HOST

The host for the bootstrap broker.

If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected.

N/A

M/I

BOOTSTRAP_BROKER_KAFKA_PORT

The Kafka port for the bootstrap broker.

N/A

M/I

BOOTSTRAP_BROKER_KAFKA_PROTOCOL

The protocol to use to connect to the bootstrap broker. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL.

Note the SASL_* protocols only support Kerberos (GSSAPI) authentication.

PLAINTEXT

M/I

BOOTSTRAP_BROKER_JMX_PORT

The JMX port to use for collection on each broker in the cluster.

Note that all discovered brokers should have JMX active on this port.

N/A

M/I

BOOTSTRAP_BROKER_JMX_USER

The JMX user to use for collection on each broker in the cluster.

N/A

M/I

BOOTSTRAP_BROKER_JMX_PASSWORD

The JMX password to use for collection on each broker in the cluster.

N/A

M/I
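A bootstrap-based discovery block might be sketched as follows; the broker host, ports, and JMX credentials are placeholders:

```yml
env:
  AUTODISCOVER_STRATEGY: bootstrap
  BOOTSTRAP_BROKER_HOST: broker1.example.com
  BOOTSTRAP_BROKER_KAFKA_PORT: 9092
  BOOTSTRAP_BROKER_KAFKA_PROTOCOL: PLAINTEXT
  BOOTSTRAP_BROKER_JMX_PORT: 9999   # every discovered broker must expose JMX on this port
  BOOTSTRAP_BROKER_JMX_USER: jmxuser
  BOOTSTRAP_BROKER_JMX_PASSWORD: jmxpass
```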

JMX options

These options apply to all JMX connections on the instance.

Setting

Description

Default

Applies To

KEY_STORE

The filepath of the keystore containing the JMX client's SSL certificate.

N/A

M/I

KEY_STORE_PASSWORD

The password for the JMX SSL key store.

N/A

M/I

TRUST_STORE

The filepath of the trust keystore containing the JMX server's SSL certificate.

N/A

M/I

TRUST_STORE_PASSWORD

The password for the JMX trust store.

N/A

M/I

DEFAULT_JMX_USER

The default user that is connecting to the JMX host to collect metrics. If the username field is omitted for a JMX host, this value will be used.

admin

M/I

DEFAULT_JMX_PASSWORD

The default password to connect to the JMX host. If the password field is omitted for a JMX host, this value will be used.

admin

M/I

TIMEOUT

The timeout for individual JMX queries in milliseconds.

10000

M/I
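For a JMX endpoint secured with SSL, the store options above might be combined like this sketch; the file paths and passwords are placeholders:

```yml
env:
  KEY_STORE: /etc/newrelic-infra/kafka/client.keystore.jks
  KEY_STORE_PASSWORD: keystore-secret
  TRUST_STORE: /etc/newrelic-infra/kafka/client.truststore.jks
  TRUST_STORE_PASSWORD: truststore-secret
  TIMEOUT: 20000   # per-query JMX timeout, in milliseconds
```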

Broker TLS connection options

You need these options if the broker protocol is SSL or SASL_SSL.

Setting

Description

Default

Applies To

TLS_CA_FILE

The certificate authority file for SSL and SASL_SSL listeners, in PEM format.

N/A

M/I

TLS_CERT_FILE

The client certificate file for SSL and SASL_SSL listeners, in PEM format.

N/A

M/I

TLS_KEY_FILE

The client key file for SSL and SASL_SSL listeners, in PEM format.

N/A

M/I

TLS_INSECURE_SKIP_VERIFY

Skip verifying the server's certificate chain and host name.

false

M/I
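A broker TLS block might be sketched as follows; the PEM paths are placeholders:

```yml
env:
  BOOTSTRAP_BROKER_KAFKA_PROTOCOL: SSL
  TLS_CA_FILE: /etc/kafka/ssl/ca.pem
  TLS_CERT_FILE: /etc/kafka/ssl/client-cert.pem
  TLS_KEY_FILE: /etc/kafka/ssl/client-key.pem
  TLS_INSECURE_SKIP_VERIFY: "false"   # keep verification on outside of testing
```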

Broker SASL and Kerberos connection options

You need these options if the broker protocol is SASL_PLAINTEXT or SASL_SSL.

Setting

Description

Default

Applies To

SASL_MECHANISM

The type of SASL authentication to use. Supported options are SCRAM-SHA-512, SCRAM-SHA-256, PLAIN, and GSSAPI.

N/A

M/I

SASL_USERNAME

SASL username required with the PLAIN and SCRAM mechanisms.

N/A

M/I

SASL_PASSWORD

SASL password required with the PLAIN and SCRAM mechanisms.

N/A

M/I

SASL_GSSAPI_REALM

Kerberos realm required with the GSSAPI mechanism.

N/A

M/I

SASL_GSSAPI_SERVICE_NAME

Kerberos service name required with the GSSAPI mechanism.

N/A

M/I

SASL_GSSAPI_USERNAME

Kerberos username required with the GSSAPI mechanism.

N/A

M/I

SASL_GSSAPI_KEY_TAB_PATH

Kerberos key tab path required with the GSSAPI mechanism.

N/A

M/I

SASL_GSSAPI_KERBEROS_CONFIG_PATH

Kerberos config path required with the GSSAPI mechanism.

/etc/krb5.conf

M/I

SASL_GSSAPI_DISABLE_FAST_NEGOTIATION

Disable FAST negotiation.

false

M/I
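For instance, a SCRAM-authenticated connection might be configured like this sketch (credentials are placeholders); for Kerberos you would instead set SASL_MECHANISM to GSSAPI together with the SASL_GSSAPI_* settings above:

```yml
env:
  BOOTSTRAP_BROKER_KAFKA_PROTOCOL: SASL_SSL
  SASL_MECHANISM: SCRAM-SHA-512
  SASL_USERNAME: kafka-monitor
  SASL_PASSWORD: sasl-secret
```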

Broker Collection filtering

Setting

Description

Default

Applies To

LOCAL_ONLY_COLLECTION

Collect only the metrics related to the configured bootstrap broker. Only used if autodiscover_strategy is bootstrap.

Environments that use discovery (such as Kubernetes) must set this to true; otherwise brokers will be discovered twice, once by the integration and once by the discovery mechanism, leading to duplicate data.

Note that activating this flag will skip KafkaTopicSample collection.

false

M/I

TOPIC_MODE

Determines which topics are collected. Options are all, none, list, or regex.

none

M/I

TOPIC_LIST

JSON array of topic names to monitor. Only in effect if topic_mode is set to list.

[]

M/I

TOPIC_REGEX

Regex pattern that matches the topic names to monitor. Only in effect if topic_mode is set to regex.

N/A

M/I

TOPIC_BUCKET

Used to split topic collection across multiple instances. Should be of the form <bucket number>/<number of buckets>.

1/1

M/I

COLLECT_TOPIC_SIZE

Collect the metric Topic size. Options are true or false, defaults to false.

This is a resource-intensive metric to collect, especially against many topics.

false

M/I

COLLECT_TOPIC_OFFSET

Collect the metric Topic offset. Options are true or false, defaults to false.

This is a resource-intensive metric to collect, especially against many topics.

false

M/I
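As an example, collecting a fixed set of topics and splitting the work across three integration instances might look like this sketch (topic names are placeholders):

```yml
env:
  TOPIC_MODE: list
  TOPIC_LIST: '["orders", "payments", "audit"]'
  TOPIC_BUCKET: "1/3"           # this instance handles bucket 1 of 3
  COLLECT_TOPIC_SIZE: "false"   # resource-intensive; enable with care
```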

Configure KafkaConsumerSample and KafkaProducerSample collection

The Kafka integration collects both metrics (M) and inventory (I) information. Check the Applies To column below to see which settings are available to each type of collection:

Setting

Description

Default

Applies To

CLUSTER_NAME

User-defined name to uniquely identify the cluster being monitored. Required.

N/A

M/I

PRODUCERS

Producers to collect. For each producer, a name, host, port, username, and password can be specified in JSON form. name is the producer's name as it appears in Kafka; if it is not set, metrics from all producers on that host:port will be gathered. host, port, username, and password are optional JMX settings that fall back to the defaults if unspecified. It is also possible to set the value to default to leave name undefined and use the default values for host, port, username, and password. Required to produce KafkaProducerSample.

Examples:

[{"host": "localhost", "port": 24, "username": "me", "password": "secret"}]

[{"name": "myProducer", "host": "localhost", "port": 24, "username": "me", "password": "secret"}]

[]

M/I

CONSUMERS

Consumers to collect. For each consumer, a name, host, port, username, and password can be specified in JSON form. name is the consumer's name as it appears in Kafka; if it is not set, metrics from all consumers on that host:port will be gathered. host, port, username, and password are optional JMX settings that fall back to the defaults if unspecified. It is also possible to set the value to default to leave name undefined and use the default values for host, port, username, and password. Required to produce KafkaConsumerSample.

Examples:

[{"host": "localhost", "port": 24, "username": "me", "password": "secret"}]

[{"name": "myConsumer", "host": "localhost", "port": 24, "username": "me", "password": "secret"}]

[]

M/I

DEFAULT_JMX_HOST

The default host to collect JMX metrics. If the host field is omitted from a producer or consumer configuration, this value will be used.

localhost

M/I

DEFAULT_JMX_PORT

The default port to collect JMX metrics. If the port field is omitted from a producer or consumer configuration, this value will be used.

9999

M/I

DEFAULT_JMX_USER

The default user that is connecting to the JMX host to collect metrics. If the username field is omitted from a producer or consumer configuration, this value will be used.

admin

M/I

DEFAULT_JMX_PASSWORD

The default password to connect to the JMX host. If the password field is omitted from a producer or consumer configuration, this value will be used.

admin

M/I

METRICS

Set to true to enable metrics-only collection.

false

INVENTORY

Set to true to enable inventory-only collection.

false
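Combining the settings above, a producer/consumer collection block might be sketched as follows; names, hosts, and ports are placeholders. The consumer entry omits host and port, so the DEFAULT_JMX_* values apply:

```yml
env:
  CLUSTER_NAME: my-kafka-cluster
  PRODUCERS: '[{"name": "myProducer", "host": "app1.example.com", "port": 9989}]'
  CONSUMERS: '[{"name": "myConsumer"}]'   # falls back to the DEFAULT_JMX_* values
  DEFAULT_JMX_HOST: localhost
  DEFAULT_JMX_PORT: 9999
  METRICS: "true"
```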

JMX SSL and timeout options

These options apply to all JMX connections on the instance.

Setting

Description

Default

Applies To

KEY_STORE

The filepath of the keystore containing the JMX client's SSL certificate.

N/A

M/I

KEY_STORE_PASSWORD

The password for the JMX SSL key store.

N/A

M/I

TRUST_STORE

The filepath of the trust keystore containing the JMX server's SSL certificate.

N/A

M/I

TRUST_STORE_PASSWORD

The password for the JMX trust store.

N/A

M/I

TIMEOUT

The timeout for individual JMX queries in milliseconds.

10000

M/I

Configure KafkaOffsetSample collection

The Kafka integration collects both metrics (M) and inventory (I) information. Check the Applies To column below to see which settings are available to each type of collection:

Setting

Description

Default

Applies To

CLUSTER_NAME

User-defined name to uniquely identify the cluster being monitored. Required.

N/A

M/I

KAFKA_VERSION

The version of the Kafka broker you're connecting to, used for setting optimum API versions. It must match, or be lower than, the broker's version.

Versions older than 1.0.0 may be missing some features.

Note that if the broker binary name is kafka_2.12-2.7.0, the Kafka API version to be used is 2.7.0 and the preceding 2.12 is the Scala language version.

1.0.0

M/I

AUTODISCOVER_STRATEGY

The method of discovering brokers. Options are zookeeper or bootstrap.

zookeeper

M/I

CONSUMER_OFFSET

Populate consumer offset data in KafkaOffsetSample if set to true.

Note that this option will skip broker, consumer, and producer collection and only collect KafkaOffsetSample.

false

M/I

CONSUMER_GROUP_REGEX

Regex pattern that matches the consumer groups to collect offset statistics for. Collection is limited to 300 consumer groups.

Note: This option must be set when CONSUMER_OFFSET is true.

N/A

M/I

INACTIVE_CONSUMER_GROUP_OFFSET

Collects offset metrics from consumer groups without any active consumer. Requires CONSUMER_OFFSET to be set to true.

false

M/I

CONSUMER_GROUP_OFFSET_BY_TOPIC

Activates an extra metric aggregation for consumer groups by topic. Requires CONSUMER_OFFSET to be set to true.

N/A

M/I

METRICS

Set to true to enable metrics-only collection.

false

INVENTORY

Set to true to enable inventory-only collection.

false
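Putting the offset settings together, a KafkaOffsetSample-only configuration might look like this sketch (the cluster name and regex are placeholders):

```yml
env:
  CLUSTER_NAME: my-kafka-cluster
  AUTODISCOVER_STRATEGY: bootstrap
  CONSUMER_OFFSET: "true"                 # collect only KafkaOffsetSample
  CONSUMER_GROUP_REGEX: '.*'              # required when CONSUMER_OFFSET is true
  INACTIVE_CONSUMER_GROUP_OFFSET: "true"
```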

Zookeeper autodiscovery arguments

These options are only relevant when the autodiscover_strategy option is set to zookeeper.

Setting

Description

Default

Applies To

ZOOKEEPER_HOSTS

The list of Apache ZooKeeper hosts to connect to, in JSON format.

If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected.

[]

M/I

ZOOKEEPER_AUTH_SCHEME

The ZooKeeper authentication scheme that is used to connect. Currently, the only supported value is digest. If omitted, no authentication is used.

N/A

M/I

ZOOKEEPER_AUTH_SECRET

The ZooKeeper authentication secret that is used to connect. Should be of the form username:password. Only required if zookeeper_auth_scheme is specified.

N/A

M/I

ZOOKEEPER_PATH

The Zookeeper node under which the Kafka configuration resides. Defaults to /.

N/A

M/I

PREFERRED_LISTENER

Use a specific listener to connect to a broker. If unset, the first listener that passes a successful test connection is used. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL.

Note the SASL_* protocols only support Kerberos (GSSAPI) authentication.

N/A

M/I

Bootstrap broker discovery arguments

These options are only relevant when the autodiscover_strategy option is set to bootstrap.

Setting

Description

Default

Applies To

BOOTSTRAP_BROKER_HOST

The host for the bootstrap broker.

If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected.

N/A

M/I

BOOTSTRAP_BROKER_KAFKA_PORT

The Kafka port for the bootstrap broker.

N/A

M/I

BOOTSTRAP_BROKER_KAFKA_PROTOCOL

The protocol to use to connect to the bootstrap broker. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL.

Note the SASL_* protocols only support Kerberos (GSSAPI) authentication.

PLAINTEXT

M/I

BOOTSTRAP_BROKER_JMX_PORT

The JMX port to use for collection on each broker in the cluster.

Note that all discovered brokers should have JMX active on this port.

N/A

M/I

BOOTSTRAP_BROKER_JMX_USER

The JMX user to use for collection on each broker in the cluster.

N/A

M/I

BOOTSTRAP_BROKER_JMX_PASSWORD

The JMX password to use for collection on each broker in the cluster.

N/A

M/I

JMX SSL and timeout options

These apply to all JMX connections on an instance.

Setting

Description

Default

Applies To

KEY_STORE

The filepath of the keystore containing the JMX client's SSL certificate.

N/A

M/I

KEY_STORE_PASSWORD

The password for the JMX SSL key store.

N/A

M/I

TRUST_STORE

The filepath of the trust keystore containing the JMX server's SSL certificate.

N/A

M/I

TRUST_STORE_PASSWORD

The password for the JMX trust store.

N/A

M/I

DEFAULT_JMX_USER

The default user that is connecting to the JMX host to collect metrics. If the username field is omitted for a JMX host, this value will be used.

admin

M/I

DEFAULT_JMX_PASSWORD

The default password to connect to the JMX host. If the password field is omitted for a JMX host, this value will be used.

admin

M/I

TIMEOUT

The timeout for individual JMX queries in milliseconds.

10000

M/I
