AWS Kinesis Firehose monitoring integration

Access to this feature depends on your subscription level; it requires Infrastructure Pro.

New Relic Infrastructure integrations include one for reporting your Amazon Kinesis Firehose data to New Relic products. This document explains how to activate the integration and describes the data that can be reported.

Features

Amazon Kinesis Firehose provides a simple way to capture and load streaming data. It can capture, transform, and load streaming data into Amazon Kinesis Data Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics.

New Relic's Kinesis Firehose integration reports data such as indexed records and bytes, counts of data copied to AWS services, the age and freshness of records, and other metric data and service metadata.

Activate integration

To enable this integration:

  1. Make sure you have installed the Infrastructure agent before you activate AWS integrations from your Infrastructure account.
  2. Follow standard procedures to Connect AWS services to Infrastructure.

Configuration and polling

You can change the polling frequency and filter data using configuration options.

Default polling for the AWS Kinesis Firehose integration:

  • New Relic polling interval: 5 minutes
  • Amazon CloudWatch data interval: 1 minute
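
The data behind this integration comes from Amazon CloudWatch. As an illustration of the intervals above, the sketch below builds a CloudWatch GetMetricData query for a Firehose stream with a 60-second period (the 1-minute CloudWatch data interval) over a 5-minute window (the New Relic polling interval). The stream name and the choice of the IncomingRecords metric are illustrative; this assumes the boto3 client shape and is not part of the New Relic integration itself.

```python
from datetime import datetime, timedelta, timezone

def build_firehose_metric_query(stream_name, metric_name="IncomingRecords", period=60):
    """Build one entry for CloudWatch GetMetricData's MetricDataQueries.

    period=60 matches the 1-minute CloudWatch data interval noted above.
    """
    return {
        "Id": "firehose_metric",
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/Firehose",
                "MetricName": metric_name,
                "Dimensions": [
                    {"Name": "DeliveryStreamName", "Value": stream_name}
                ],
            },
            "Period": period,
            "Stat": "Sum",
        },
    }

# With boto3 and AWS credentials configured, the query could be used like:
# import boto3
# cloudwatch = boto3.client("cloudwatch")
# end = datetime.now(timezone.utc)
# resp = cloudwatch.get_metric_data(
#     MetricDataQueries=[build_firehose_metric_query("my-stream")],
#     StartTime=end - timedelta(minutes=5),  # one New Relic polling interval
#     EndTime=end,
# )
```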

Find and use data

To find your integration data in Infrastructure, go to infrastructure.newrelic.com > Integrations > Amazon Web Services and select one of the Kinesis Firehose integration links.

In New Relic Insights, data is attached to the QueueSample event type, with a provider value of KinesisDeliveryStream.
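
For example, an NRQL query against that event type might look like the following. The specific attribute shown (`provider.deliveryToS3DataFreshness.Average`) follows the usual `provider.<metricName>.<statistic>` naming pattern for AWS integration attributes; verify the exact attribute names in the Insights data explorer.

```sql
-- Average S3 data freshness per delivery stream, over time
SELECT average(`provider.deliveryToS3DataFreshness.Average`)
FROM QueueSample
WHERE provider = 'KinesisDeliveryStream'
FACET displayName
TIMESERIES
```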

For more on how to use your data, see Understand and use integration data.

Metric data

This integration collects the following metrics:

Name Description
deliveryToElasticsearchBytes The number of bytes indexed to Amazon ES over the specified time period.
deliveryToElasticsearchRecords The number of records indexed to Amazon ES over the specified time period.
deliveryToElasticsearchSuccess The sum of successfully indexed records over the sum of records that were attempted.
deliveryToRedshiftBytes The number of bytes copied to Amazon Redshift over the specified time period.
deliveryToRedshiftRecords The number of records copied to Amazon Redshift over the specified time period.
deliveryToRedshiftSuccess The sum of successful Amazon Redshift COPY commands over the sum of all Amazon Redshift COPY commands.
deliveryToS3Bytes The number of bytes delivered to Amazon S3 over the specified time period.
deliveryToS3DataFreshness The age, in seconds, of the oldest record in Kinesis Firehose, measured from when the record entered Kinesis Firehose to now. Any record older than this age has been delivered to the S3 bucket.
deliveryToS3Records The number of records delivered to Amazon S3 over the specified time period.
deliveryToS3Success The sum of successful Amazon S3 put commands over the sum of all Amazon S3 put commands.
incomingBytes The number of bytes ingested into the Kinesis Firehose stream over the specified time period.
incomingRecords The number of records ingested into the Kinesis Firehose stream over the specified time period.
putRecordBatchLatency The time taken in milliseconds per PutRecordBatch operation, measured over the specified time period.
putRecordBytes The number of bytes put to the Kinesis Firehose delivery stream using PutRecord over the specified time period.
putRecordLatency The time taken in milliseconds per PutRecord operation, measured over the specified time period.
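
The success metrics above (deliveryToS3Success, deliveryToRedshiftSuccess, deliveryToElasticsearchSuccess) are ratios of successful operations to attempted operations over the period. As a minimal illustration of that definition, with made-up sample counts:

```python
def delivery_success_ratio(successful, attempted):
    """Ratio of successful delivery operations to attempts over a period.

    Mirrors metrics such as deliveryToS3Success, defined as the sum of
    successful S3 put commands over the sum of all S3 put commands.
    Returns None when there were no attempts in the period.
    """
    if attempted == 0:
        return None
    return successful / attempted

# Hypothetical counts for one polling period:
ratio = delivery_success_ratio(successful=495, attempted=500)
print(f"deliveryToS3Success = {ratio:.2f}")  # 0.99
```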

Inventory data

New Relic collects the following Kinesis Firehose inventory data. For more about inventory data, see Understand and use data.

Name Description
createTimestamp The date and time that the delivery stream was created.
destinations The destinations. Format is an array of DestinationDescription objects.
lastUpdateTimestamp The date and time that the delivery stream was last updated.
status The status of the delivery stream.
versionId Each time the destination of a delivery stream is updated, the version ID changes. The current version ID must be supplied when updating the destination, so that the service knows it is applying the changes to the correct version of the delivery stream.
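
These inventory fields correspond to what Firehose's DescribeDeliveryStream API returns in its DeliveryStreamDescription. The sketch below extracts them from a response dict in the boto3 shape; the sample response uses illustrative values rather than a live API call.

```python
def extract_inventory(describe_response):
    """Pull the inventory fields listed above out of a
    Firehose describe_delivery_stream response (boto3 shape)."""
    desc = describe_response["DeliveryStreamDescription"]
    return {
        "createTimestamp": desc.get("CreateTimestamp"),
        "destinations": desc.get("Destinations", []),
        "lastUpdateTimestamp": desc.get("LastUpdateTimestamp"),
        "status": desc.get("DeliveryStreamStatus"),
        "versionId": desc.get("VersionId"),
    }

# Sample response shape (illustrative values, not from a real stream):
sample = {
    "DeliveryStreamDescription": {
        "DeliveryStreamName": "my-stream",
        "DeliveryStreamStatus": "ACTIVE",
        "VersionId": "1",
        "CreateTimestamp": "2019-01-01T00:00:00Z",
        "LastUpdateTimestamp": "2019-01-02T00:00:00Z",
        "Destinations": [{"DestinationId": "destinationId-000000000001"}],
    }
}
inventory = extract_inventory(sample)
print(inventory["status"])  # ACTIVE
```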

For more help

Recommendations for learning more: