
Amazon EFS monitoring integration

Important

Enable the AWS CloudWatch Metric Streams integration to monitor all CloudWatch metrics from your AWS services, including custom namespaces. Individual integrations are no longer our recommended option.

New Relic's integrations include an Amazon EFS integration for reporting your EFS data to New Relic. This document explains the integration's features, how to activate it, and what data can be reported.

Features

With New Relic's integration for monitoring AWS Elastic File System (EFS), you can monitor EFS file system size, read/write operations, I/O capacity, throughput, and more. AWS integration data is also available for analysis, queries, and chart creation.

If connected through a VPC, you can also use the EFS file system with your own on-premises servers, which lets you share file systems across applications hosted in hybrid environments.

Activate integration

To enable this integration, follow standard procedures to Connect AWS services to New Relic.

Configuration and polling

You can change the polling frequency and filter data using configuration options.

Default polling information for the Amazon EFS integration:

  • New Relic polling interval: 5 minutes
  • Amazon CloudWatch data interval: 1 minute or 5 minutes

Find and use data

To find this integration's data, go to one.newrelic.com > All capabilities > Infrastructure > AWS and select one of the Amazon EFS integration links.

You can query and explore your data using the BlockDeviceSample event type, with a provider value of EfsFileSystem.
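For example, the following NRQL query lists recent samples reported by this integration. It's a minimal sketch: the event type and provider value are the ones named above, while the time window and limit are arbitrary.

SELECT * FROM BlockDeviceSample WHERE provider = 'EfsFileSystem' SINCE 30 minutes ago LIMIT 10

Individual metrics appear as attributes on these events; browse a sample in the query builder to see the exact attribute names available in your account.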

For more on how to find and use integration data, see Understand integration data.

Metric data

This integration collects the following Amazon EFS metrics. Each entry lists the metric's description, units, and valid statistics.

BurstCreditBalance

The number of burst credits that a file system has.

Burst credits allow a file system to burst to throughput levels above a file system’s baseline level for periods of time. For more information, see Throughput scaling in Amazon EFS.

The Minimum statistic is the smallest burst credit balance for any minute during the period. The Maximum statistic is the largest burst credit balance for any minute during the period. The Average statistic is the average burst credit balance during the period.

Units: Bytes

Valid statistics: Minimum, Maximum, Average

ClientConnections

The number of client connections to a file system. When using a standard client, there is one connection per mounted Amazon EC2 instance.

Note: To calculate the average ClientConnections for periods greater than one minute, divide the Sum statistic by the number of minutes in the period. A worked example follows this entry.

Units: Count of client connections

Valid statistics: Sum
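As a worked example of the note above: if ClientConnections reports a Sum of 600 for a 5-minute period, the average number of client connections over that period is 600 / 5 = 120.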

DataReadIOBytes

The number of bytes for each file system read operation.

The Sum statistic is the total number of bytes associated with read operations. The Minimum statistic is the size of the smallest read operation during the period. The Maximum statistic is the size of the largest read operation during the period. The Average statistic is the average size of read operations during the period. The SampleCount statistic provides a count of read operations.

Units:

  • Bytes for Minimum, Maximum, Average, and Sum.

  • Count for SampleCount.

Valid statistics: Minimum, Maximum, Average, Sum, SampleCount

DataWriteIOBytes

The number of bytes for each file system write operation.

The Sum statistic is the total number of bytes associated with write operations. The Minimum statistic is the size of the smallest write operation during the period. The Maximum statistic is the size of the largest write operation during the period. The Average statistic is the average size of write operations during the period. The SampleCount statistic provides a count of write operations.

Units:

  • Bytes for Minimum, Maximum, Average, and Sum.

  • Count for SampleCount.

Valid statistics: Minimum, Maximum, Average, Sum, SampleCount

MetadataIOBytes

The number of bytes for each metadata operation.

The Sum statistic is the total number of bytes associated with metadata operations. The Minimum statistic is the size of the smallest metadata operation during the period. The Maximum statistic is the size of the largest metadata operation during the period. The Average statistic is the average size of metadata operations during the period. The SampleCount statistic provides a count of metadata operations.

Units:

  • Bytes for Minimum, Maximum, Average, and Sum.

  • Count for SampleCount.

Valid statistics: Minimum, Maximum, Average, Sum, SampleCount

PercentIOLimit

Shows how close a file system is to reaching the I/O limit of the General Purpose performance mode. If this metric is at 100% more often than not, consider moving your application to a file system using the Max I/O performance mode. A sample query follows this entry.

Note: This metric is only submitted for file systems using the General Purpose performance mode.

Units: Percent
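To see how often a file system approaches this limit, a query along these lines can help. This is a sketch: the provider.percentIOLimit.Maximum attribute name assumes the integration's usual provider.<metricName>.<Statistic> naming and should be confirmed against your own data before use.

SELECT max(provider.percentIOLimit.Maximum) FROM BlockDeviceSample WHERE provider = 'EfsFileSystem' TIMESERIES SINCE 1 day ago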

PermittedThroughput

The maximum amount of throughput a file system is allowed, given the file system size and BurstCreditBalance. For more information, see Amazon EFS Performance.

The Minimum statistic is the smallest throughput permitted for any minute during the period. The Maximum statistic is the highest throughput permitted for any minute during the period. The Average statistic is the average throughput permitted during the period.

Units: Bytes per second

Valid statistics: Minimum, Maximum, Average

TotalIOBytes

The number of bytes for each file system operation, including data read, data write, and metadata operations.

The Sum statistic is the total number of bytes associated with all file system operations. The Minimum statistic is the size of the smallest operation during the period. The Maximum statistic is the size of the largest operation during the period. The Average statistic is the average size of an operation during the period. The SampleCount statistic provides a count of all operations.

Note: To calculate the average operations per second for a period, divide the SampleCount statistic by the number of seconds in the period. To calculate the average throughput (Bytes per second) for a period, divide the Sum statistic by the number of seconds in the period. A worked example follows this entry.

Units:

  • Bytes for Minimum, Maximum, Average, and Sum.

  • Count for SampleCount.

Valid statistics: Minimum, Maximum, Average, Sum, SampleCount
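As a worked example of the note above: over a 60-second period, a SampleCount of 12,000 corresponds to 12,000 / 60 = 200 operations per second, and a Sum of 31,457,280 bytes corresponds to 31,457,280 / 60 = 524,288 bytes per second (0.5 MiB/s) of average throughput.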
