
Set up AWS VPC flow log monitoring

BETA FEATURE

This feature is currently in open beta and still in development, but we encourage you to try it out!

Set up your AWS VPC flow logs to send them to New Relic One.

Prerequisites

New Relic One account prerequisites

AWS prerequisites

Required fields from VPC flow logs

Important

The default flow log format does not include all of the fields ktranslate requires to work properly. You must ensure the fields below are added, or the data you receive in New Relic One will be incomplete.

  • version: The VPC Flow Logs version.
  • srcaddr: The source address for incoming traffic, or the IPv4 or IPv6 address of the network interface for outgoing traffic. For a network interface, the IPv4 address is always its private IPv4 address.
  • dstaddr: The destination address for outgoing traffic, or the IPv4 or IPv6 address of the network interface for incoming traffic. For a network interface, the IPv4 address is always its private IPv4 address.
  • srcport: The source port of the traffic.
  • dstport: The destination port of the traffic.
  • protocol: The IANA protocol number of the traffic.
  • packets: The number of packets transferred during the flow.
  • bytes: The number of bytes transferred during the flow.
  • vpc-id: The ID of the VPC that contains the network interface for which the traffic is recorded.
  • flow-direction: The direction of the flow with respect to the interface where traffic is captured. The possible values are ingress and egress.
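You supply these fields by enabling flow logs with a custom log format. The sketch below builds such a command with the AWS CLI; the VPC ID and bucket ARN are placeholders you'd replace with your own, and the command is echoed so you can review it before running:

```shell
# Custom log format containing every field ktranslate requires.
LOG_FORMAT='${version} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${vpc-id} ${flow-direction}'

# Placeholders: replace the VPC ID and bucket ARN with your own values.
# The command is echoed for review; remove 'echo' to actually run it.
echo aws ec2 create-flow-log \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type s3 \
  --log-destination arn:aws:s3:::my-flow-log-bucket \
  --log-format "$LOG_FORMAT"
```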

Set up AWS VPC flow logs monitoring in New Relic One

To send your VPC flow logs to New Relic One, follow these steps:

  1. Create a private ECR registry and upload the ktranslate image
  2. Create a Lambda function from the ECR image
  3. Validate your settings

1. Create a private ECR registry and upload the ktranslate image

  1. Authenticate to your registry by running:

    bash
    $ aws ecr get-login-password --region $AWS_ACCOUNT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com
  2. Create a repository to hold the ktranslate image by running:

    bash
    $ aws ecr create-repository --repository-name ktranslate --image-scanning-configuration scanOnPush=true --region $AWS_ACCOUNT_REGION
  3. Pull the ktranslate image from Docker Hub by running:

    bash
    $ docker pull kentik/ktranslate:v2
  4. Tag the image to push to your docker repository by running:

    bash
    $ docker tag kentik/ktranslate:v2 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2
  5. Push the image to your docker repository by running:

    bash
    $ docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2

After running these steps, you should see an output similar to the following:

bash
The push refers to repository [$AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate]
870d899ac0b0: Pushed
0a4768abd477: Pushed
b206b92a2843: Pushed
22abafd3e6c9: Pushed
1335c3725252: Pushed
7188c9350e77: Pushed
2b75f71baacd: Pushed
ba50c5652654: Pushed
80bbd31930ea: Pushed
c3d2a28a326e: Pushed
1a058d5342cc: Pushed
v2: digest: sha256:4cfe36919ae954063203a80f69ca1795280117c44947a09d678b4842bb8e4dd2 size: 2624

2. Create a Lambda function from the ECR image

The Lambda function you create must reside in the same AWS Region as the S3 bucket where you store your VPC flow logs. To create a Lambda function defined as a container image, follow these steps:

  1. Navigate to the Lambda service in your AWS console and select Create function.
  2. Select the Container image tile at the top of the screen, and:
  • Name your function.
  • Click Browse Images and choose the ktranslate image with the v2 tag you pushed to ECR.
  • Keep the architecture on x86_64, accept the default permissions, and click Create function.
  3. On the landing page for your new function, select the Configuration tab, and:
  • In General configuration, change the timeout value to 0 min 20 sec.
  • In the Permissions section, click the Execution role for your function, which opens a new tab for IAM.
    • On the Permissions tab, select Attach policies and add the AmazonS3ReadOnlyAccess policy to grant your function access to the S3 bucket your VPC flow logs are in.
  4. Back on your function's landing page, in the Environment variables section, click Edit and add the Lambda environment variables.
  5. In the Triggers section, click Add trigger, and:
  • Select the S3 type.
  • Select the bucket where you store your VPC flow logs.
  • Choose the All object create events event type.
  • Optionally, if your bucket has a custom folder in the root directory outside the AWSLogs directory, you can add it in the Prefix section.
  • Accept the Recursive Invocation warning and click Add.

At this point, your Lambda function is deployed and listening for new events on your S3 bucket.
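The console steps above can also be approximated with the AWS CLI. This is a sketch under assumptions: the function name and role ARN are placeholders you'd replace with your own, and the command is echoed for review rather than executed:

```shell
# Placeholders: adjust the function name, role ARN, and registry variables.
IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2"

# Echoed for review; remove 'echo' to create the function for real.
echo aws lambda create-function \
  --function-name ktranslate-vpc-flow-logs \
  --package-type Image \
  --code ImageUri="$IMAGE_URI" \
  --role arn:aws:iam::"$AWS_ACCOUNT_ID":role/my-ktranslate-lambda-role \
  --timeout 20 \
  --architectures x86_64
```

Note that you'd still need to attach the S3 trigger and environment variables afterward, as described above.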

3. Validate your settings

Tip

It can take several minutes for data to first appear in your account as the export of VPC flow logs to S3 usually runs on a 5-minute cycle.

To confirm your Lambda function is working as expected, do one of the following:

  • Go to one.newrelic.com > Explorer, where you'll begin to see VPC Network entities. Click any of them to investigate the various metrics each one sends.
  • Go to one.newrelic.com > Query your data and run the following NRQL query to get a quick summary of the recent VPCs you have flow logs from:
    FROM KFlow SELECT count(*) FACET device_name WHERE provider = 'kentik-vpc'
  • In your AWS Console, click the Monitor tab on your function's landing page to track important metrics like invocation count, error count, and success rate. You can also investigate the error logs from recent invocations.
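You can also run the same NRQL check from the command line through New Relic's NerdGraph API. A sketch, assuming NEW_RELIC_API_KEY and NR_ACCOUNT_ID are set in your environment; the curl command is echoed for review before sending:

```shell
# The NRQL check, wrapped in a NerdGraph query. The escaped quotes keep the
# final JSON payload valid once the variables are expanded.
NRQL="FROM KFlow SELECT count(*) FACET device_name WHERE provider = 'kentik-vpc'"
GRAPHQL="{ actor { account(id: ${NR_ACCOUNT_ID:-0}) { nrql(query: \\\"$NRQL\\\") { results } } } }"

# Echoed for review; remove 'echo' to send the request (US-region endpoint).
echo curl -s https://api.newrelic.com/graphql \
  -H 'Content-Type: application/json' \
  -H "API-Key: $NEW_RELIC_API_KEY" \
  --data "{\"query\": \"$GRAPHQL\"}"
```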

Tip

We recommend adding serverless monitoring from New Relic One to your new Lambda function. This way, you'll proactively monitor the function's health and get alerted if problems occur.

Find and use your metrics

All VPC flow logs exported from the ktranslate Lambda function use the KFlow namespace via the New Relic Event API. These are the fields currently populated by this integration:

  • application (String): The class of program generating the traffic in this flow record, derived from the lowest numeric value of l4_dst_port and l4_src_port. Common examples include http, ssh, and ftp.
  • dest_vpc (String): The name of the VPC the traffic in this flow record is targeting, if known.
  • device_name (String): The name of the VPC this flow record was exported from.
  • dst_addr (String): The target IPv4 address for this flow record.
  • dst_as (Numeric): The target Autonomous System Number for this flow record.
  • dst_as_name (String): The target Autonomous System Name for this flow record.
  • dst_endpoint (String): The target IP:Port tuple for this flow record; a combination of dst_addr and l4_dst_port.
  • dst_geo (String): The target country for this flow record, if known.
  • flow_direction (String): The direction of flow for this record, from the point of view of the interface where the traffic was captured. Valid options are ingress and egress.
  • in_bytes (Numeric): The number of bytes transferred for ingress flow records.
  • in_pkts (Numeric): The number of packets transferred for ingress flow records.
  • l4_dst_port (Numeric): The target port for this flow record.
  • l4_src_port (Numeric): The source port for this flow record.
  • out_bytes (Numeric): The number of bytes transferred for egress flow records.
  • out_pkts (Numeric): The number of packets transferred for egress flow records.
  • protocol (String): The display name of the protocol used in this flow record, derived from the numeric IANA protocol number.
  • provider (String): Uniquely identifies the various sources of data from ktranslate. VPC flow logs always have the value kentik-vpc.
  • sample_rate (Numeric): The rate at which ktranslate samples from the various files in the S3 bucket for flow exports. The default is 1000; it can be configured with the KENTIK_SAMPLE_RATE environment variable.
  • source_vpc (String): The name of the VPC the traffic in this flow record originated from, if known.
  • src_addr (String): The source IPv4 address for this flow record.
  • src_as (Numeric): The source Autonomous System Number for this flow record.
  • src_as_name (String): The source Autonomous System Name for this flow record.
  • src_endpoint (String): The source IP:Port tuple for this flow record; a combination of src_addr and l4_src_port.
  • src_geo (String): The source country for this flow record, if known.
  • start_time (Numeric): The time, in Unix seconds, when the first packet of the flow was received within the aggregation interval. This might be up to 60 seconds after the packet was transmitted or received on the network interface.
  • timestamp (Numeric): The time, in Unix seconds, when this flow record was received by the New Relic Event API.
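As an illustration of how the application attribute is derived, the sketch below takes the lower of the two ports and maps it to a well-known service name. This is a simplified stand-in, not ktranslate's actual code, and it maps only a few example ports:

```shell
# Simplified sketch of deriving `application` from the two ports.
# ktranslate uses a much fuller service table; this maps only a few examples.
application() {
  port=$(( $1 < $2 ? $1 : $2 ))   # lowest numeric value of the two ports
  case "$port" in
    21) echo ftp ;;
    22) echo ssh ;;
    80) echo http ;;
    443) echo https ;;
    *) echo "$port" ;;               # unknown ports fall back to the number
  esac
}

application 51234 80   # prints: http
```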

Environment variables for AWS Lambda functions

When you're configuring your AWS Lambda function, you need to set up the following environment variables:

  • KENTIK_MODE: nr1.vpc.lambda
  • NEW_RELIC_API_KEY: The New Relic license key for your account.
  • NR_ACCOUNT_ID: Your New Relic account ID.
  • NR_REGION: The New Relic datacenter region for your account. The possible values are US and EU; the default is US.
  • KENTIK_SAMPLE_RATE: The rate of randomized sampling ktranslate applies to the flow export objects in S3. The default is 1000. Setting this to 1 disables all sampling, and ktranslate ships every flow record to New Relic One.
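If you prefer to set these from the command line rather than the console, the AWS CLI sketch below shows one way. The function name, key, and account ID are placeholders you must replace, and the command is echoed for review:

```shell
# Placeholder values: substitute your own key, account ID, and function name.
ENV_VARS='Variables={KENTIK_MODE=nr1.vpc.lambda,NEW_RELIC_API_KEY=NRAK-XXXX,NR_ACCOUNT_ID=1234567,NR_REGION=US,KENTIK_SAMPLE_RATE=1000}'

# Echoed for review; remove 'echo' to apply the configuration.
echo aws lambda update-function-configuration \
  --function-name ktranslate-vpc-flow-logs \
  --environment "$ENV_VARS"
```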

Tip

For S3 objects with fewer than 100 flow records, ktranslate reverts to a sample rate of 1 and processes every record. For S3 objects with more than 100 flow records, ktranslate uses the configured value of KENTIK_SAMPLE_RATE (default 1000), meaning each record in the object has a 1-in-1,000 chance of being sampled.
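The rule above can be sketched as a small function (illustrative only, not ktranslate's source): given an object's record count and the configured rate, it returns the effective sample rate that gets applied:

```shell
# Illustrative sketch of the sampling rule described above.
# Objects with fewer than 100 records get an effective rate of 1 (keep all);
# larger objects use the configured KENTIK_SAMPLE_RATE.
effective_sample_rate() {
  record_count=$1
  configured_rate=${2:-1000}
  if [ "$record_count" -lt 100 ] || [ "$configured_rate" -le 1 ]; then
    echo 1
  else
    echo "$configured_rate"
  fi
}

effective_sample_rate 50 1000    # prints: 1
effective_sample_rate 5000 1000  # prints: 1000
```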

Copyright © 2022 New Relic Inc.