This feature is currently in open beta and still in development, but we encourage you to try it out!
Set up your AWS VPC flow logs to send them to New Relic One.
- A New Relic account. Don't have one? Sign up for free! No credit card required.
- A New Relic account ID.
- A New Relic license key.
- AWS VPC flow logs export configured to an existing S3 bucket.
- Permissions to build and publish images to Amazon ECR.
- Permissions to create a Lambda function.
- AWS CLI (v 1.9.15+) installed.
- Docker installed.
The default format for flow logs does not include all of the fields ktranslate requires to work properly. You must ensure the fields below are included in your flow log format, or the data you receive in New Relic One will be incomplete.
| Flow record field | Description |
| --- | --- |
| `version` | The VPC Flow Logs version. |
| `srcaddr` | The source address for incoming traffic, or the IPv4 or IPv6 address of the network interface for outgoing traffic. For a network interface, the IPv4 address is always its private IPv4 address. |
| `dstaddr` | The destination address for outgoing traffic, or the IPv4 or IPv6 address of the network interface for incoming traffic. For a network interface, the IPv4 address is always its private IPv4 address. |
| `srcport` | The source port of the traffic. |
| `dstport` | The destination port of the traffic. |
| `protocol` | The IANA protocol number of the traffic. |
| `packets` | The number of packets transferred during the flow. |
| `bytes` | The number of bytes transferred during the flow. |
| `vpc-id` | The ID of the VPC that contains the network interface for which the traffic is recorded. |
| `flow-direction` | The direction of the flow with respect to the interface where traffic is captured. The possible values are `ingress` and `egress`. |
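If you're creating (or re-creating) your flow logs with the AWS CLI, a custom log format lets you include all of the required fields in one step. This is a sketch; the VPC ID and bucket ARN are placeholders for your own resources:

```bash
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type s3 \
  --log-destination arn:aws:s3:::my-flow-log-bucket \
  --log-format '${version} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${vpc-id} ${flow-direction}'
```

Note the single quotes around the `--log-format` value: they prevent your shell from expanding the `${...}` field tokens.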
To send your VPC flow logs to New Relic One, follow these steps:
- Create a private ECR registry and upload the ktranslate image
- Create a Lambda function from the ECR image
- Validate your settings
Authenticate to your registry by running:

```bash
aws ecr get-login-password --region $AWS_ACCOUNT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com
```
Create a repository to hold the `ktranslate` image by running:

```bash
aws ecr create-repository --repository-name ktranslate --image-scanning-configuration scanOnPush=true --region $AWS_ACCOUNT_REGION
```
Pull the `ktranslate` image from Docker Hub by running:

```bash
docker pull kentik/ktranslate:v2
```
Tag the image to push to your docker repository by running:

```bash
docker tag kentik/ktranslate:v2 $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2
```
Push the image to your docker repository by running:

```bash
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate:v2
```
After running these steps, you should see an output similar to the following:
```
The push refers to repository [$AWS_ACCOUNT_ID.dkr.ecr.$AWS_ACCOUNT_REGION.amazonaws.com/ktranslate]
870d899ac0b0: Pushed
0a4768abd477: Pushed
b206b92a2843: Pushed
22abafd3e6c9: Pushed
1335c3725252: Pushed
7188c9350e77: Pushed
2b75f71baacd: Pushed
ba50c5652654: Pushed
80bbd31930ea: Pushed
c3d2a28a326e: Pushed
1a058d5342cc: Pushed
v2: digest: sha256:4cfe36919ae954063203a80f69ca1795280117c44947a09d678b4842bb8e4dd2 size: 2624
```
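Optionally, you can confirm the image is available in ECR before moving on. This assumes the same shell variables used in the steps above:

```bash
aws ecr describe-images --repository-name ktranslate --region $AWS_ACCOUNT_REGION
```

The output should list an image with the `v2` tag.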
The Lambda function you create must reside in the same AWS Region as the S3 bucket where you store your VPC flow logs. To create a Lambda function defined as a container image, follow these steps:
- Navigate to the Lambda service in your AWS console and select Create function.
- Select the Container image tile at the top of the screen, and:
- Name your function.
- Click Browse Images and choose the ktranslate image with the `v2` tag you pushed to ECR.
- Keep the architecture on x86_64, accept the default permissions, and click Create function.
- On the landing page for your new function, select the Configuration tab, and:
- In General configuration, change the timeout value to `0 min 20 sec`.
- In the Permissions section, click Execution role for your function, which will open a new tab for IAM.
- On the Permissions tab, select Attach policies and add the `AmazonS3ReadOnlyAccess` policy to grant your function access to the S3 bucket your VPC flow logs are in.
- Back on your function's landing page, in the Environment variables section, click Edit and add the Lambda environment variables.
- In the Triggers section, click Add trigger, and:
- Select the S3 type.
- Select the bucket where you store your VPC Flow Logs.
- Choose the All object create events event type.
- Optionally, if your bucket has a custom folder in the root directory outside the `AWSLogs` directory, you can add it in the Prefix section.
- Accept the Recursive Invocation warning and click Add.
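If you prefer scripting, the timeout and permissions changes from the console steps above can be sketched with the AWS CLI. The function and role names here are placeholders for whatever you created:

```bash
# Set the 20-second timeout on the function.
aws lambda update-function-configuration \
  --function-name my-ktranslate-function \
  --timeout 20

# Grant the function's execution role read access to the flow log bucket.
aws iam attach-role-policy \
  --role-name my-ktranslate-function-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```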
At this point, your Lambda function is deployed and listening for new events on your S3 bucket.
It can take several minutes for data to first appear in your account as the export of VPC flow logs to S3 usually runs on a 5-minute cycle.
To confirm your Lambda function is working as expected, do one of the following:
- Go to one.newrelic.com > Explorer and you will begin to see `VPC Network` entities. You can click them and investigate the various metrics each one is sending.
- Go to one.newrelic.com > Query your data and run the following NRQL query to get a quick summary of the recent VPCs you have flow logs from:

```sql
FROM KFlow SELECT count(*) FACET device_name WHERE provider = 'kentik-vpc'
```
- In your AWS Console, click the Monitor tab in your function's landing page, where you can track important metrics like invocation, error count, and success rate. You can also investigate the error logs from recent invocations.
We recommend adding serverless monitoring from New Relic One to your new Lambda function. This way, you can proactively monitor the function's health and get alerted to problems.
All VPC flow logs exported by the `ktranslate` Lambda function use the `KFlow` namespace via the New Relic Event API. Currently, these are the fields populated by this integration:
| Field | Description |
| --- | --- |
| `application` | The class of program generating the traffic in this flow record. This is derived from the lowest numeric value from `l4_src_port` and `l4_dst_port`. |
| `dest_vpc` | The name of the VPC the traffic in this flow record is targeting, if known. |
| `device_name` | The name of the VPC this flow record was exported from. |
| `dst_addr` | The target IPv4 address for this flow record. |
| `dst_as` | The target Autonomous System Number for this flow record. |
| `dst_as_name` | The target Autonomous System Name for this flow record. |
| `dst_geo` | The target country for this flow record, if known. |
| `flow_direction` | The direction of flow for this record, from the point of view of the interface where the traffic was captured. Valid options are `ingress` and `egress`. |
| `in_bytes` | The number of bytes transferred for ingress flow records. |
| `in_pkts` | The number of packets transferred for ingress flow records. |
| `l4_dst_port` | The target port for this flow record. |
| `l4_src_port` | The source port for this flow record. |
| `out_bytes` | The number of bytes transferred for egress flow records. |
| `out_pkts` | The number of packets transferred for egress flow records. |
| `protocol` | The display name of the protocol used in this flow record, derived from the numeric IANA protocol number. |
| `provider` | This attribute is used to uniquely identify various sources of data from `ktranslate`. For VPC flow logs, the value is `kentik-vpc`. |
| `sample_rate` | The rate at which records in this object were sampled, as controlled by the `KENTIK_SAMPLE_RATE` environment variable. |
| `source_vpc` | The name of the VPC the traffic in this flow record originated from, if known. |
| `src_addr` | The source IPv4 address for this flow record. |
| `src_as` | The source Autonomous System Number for this flow record. |
| `src_as_name` | The source Autonomous System Name for this flow record. |
| `src_geo` | The source country for this flow record, if known. |
| `start_time` | The time, in Unix seconds, when the first packet of the flow was received within the aggregation interval. This might be up to 60 seconds after the packet was transmitted or received on the network interface. |
| `timestamp` | The time, in Unix seconds, when this flow record was received by the New Relic Event API. |
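As an example, the `KFlow` fields can be combined in NRQL to surface the source addresses moving the most traffic. This is a sketch assuming the field names of this integration; adjust the time window and filters to your needs:

```sql
FROM KFlow SELECT sum(in_bytes) + sum(out_bytes) AS 'totalBytes' FACET src_addr WHERE provider = 'kentik-vpc' SINCE 1 hour ago
```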
When you're configuring your AWS Lambda function, you need to set up the following environment variables:

- The New Relic license key for your account.
- Your New Relic account ID.
- The New Relic datacenter region for your account. The possible values are `US` and `EU`.
- `KENTIK_SAMPLE_RATE`: The rate of randomized sampling.
For S3 objects with fewer than 100 flow records, `ktranslate` reverts to a sample rate of `1` and processes every record. For S3 objects with more than 100 flow records, `ktranslate` uses the configured value of `KENTIK_SAMPLE_RATE`, which defaults to `1000`, meaning each record in the object has a 1-in-1000 chance of being sampled.
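The sampling behavior described above can be sketched as a small shell function. This is an illustration of the documented rules, not ktranslate's actual source:

```shell
# Assumed logic: objects with fewer than 100 records are processed in full
# (effective rate 1); larger objects use the configured sample rate.
KENTIK_SAMPLE_RATE=${KENTIK_SAMPLE_RATE:-1000}   # default sample rate

effective_rate() {
  records=$1
  if [ "$records" -lt 100 ]; then
    echo 1
  else
    echo "$KENTIK_SAMPLE_RATE"
  fi
}

echo "50 records -> rate $(effective_rate 50)"          # every record kept
echo "150000 records -> rate $(effective_rate 150000)"  # ~150 records kept
```

At the default rate of 1000, a 150,000-record object yields roughly 150 sampled records.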