Kentik for AWS

This guide provides instructions for integrating Kentik with Amazon Web Services (AWS).

An example private connection between a data center and an AWS region

Process Overview

Integrating AWS with Kentik involves setting up both the AWS environment and the Kentik portal to collect the following:

  • Metadata (using Describe API calls from your AWS accounts or organization)

  • Flow logs generated by your Virtual Private Clouds (VPCs), Transit Gateways, or Network Firewalls

  • Metrics about your network-related cloud services

Here's how the process works:

  1. Logging Setup (AWS):

    1. Set up metadata and flow logging for desired VPCs and Transit Gateways to an S3 bucket

    2. Create and grant access to the S3 bucket

  2. Create a Kentik Cloud Export: Configure a new "cloud export" in the Kentik portal to ingest data from AWS.

  3. Use the Kentik portal to:

    1. Monitor your AWS network traffic

    2. Visualize resource utilization

    3. Gain insights for optimizing network performance and enhancing security monitoring

  Note: As a first step, we recommend creating a metadata-only cloud export with the necessary metadata permissions. Then add flow ingestion to that export (or create a separate one) by setting up flow logs to S3 and granting Kentik the necessary permissions on that bucket. See Metadata-only Setup (AWS) for instructions.

About AWS Cloud Visibility

Kentik collects the following data from AWS VPCs, subnets, and network interfaces:

  • Metadata: Used for flow enrichment, Cloud Pathfinder (for connectivity analysis), and topology views on the Kentik Map. The following metadata is collected via AWS APIs:

    • VPCs, subnets, network ACLs, security groups, AZs

    • ENIs for EC2 instances, gateways (including TGW), attachments

    • DirectConnects

    • Load balancers

    • CloudWAN / Network Manager

    • Network Firewalls

  • Flow Logs: Used for traffic analytics in Kentik modules like Data Explorer, Insights, Alerting, and Kentik Map. Collected by enabling log collection on AWS resources and directing them to an S3 bucket for Kentik ingestion (see AWS Flow Log Overview).

  • Metrics: Cloud metrics history for historical telemetry analysis, trending, and alerting from the following namespaces:

    • AWS/DX

    • AWS/EC2

    • AWS/ELB

    • AWS/NATGateway

    • AWS/NetworkELB

    • AWS/PrivateLinkEndpoints

    • AWS/TransitGateway

    • AWS/ApiGateway

    • AWS/ApplicationELB

    • AWS/GatewayELB

    • AWS/NetworkFirewall

    • AWS/NetworkManager

    • AWS/PrivateLinkServices

    • AWS/Route53

    • AWS/S3

    • AWS/VPN

Notes:

  • Metadata and metrics collection can be independent of flow log collection.

  • Kentik customers often consolidate flow logs into a few S3 buckets (e.g., one per AWS region).

AWS Flow Log Overview

AWS flow logs are similar to the flow records, such as NetFlow or sFlow (see About Flow), exported from physical networks. Each flow log contains records about network flows that originate or terminate in an AWS resource.

VPC Flow Logs vs. Transit Gateway Flow Logs

AWS supports the collection of both VPC flow logs and Transit Gateway flow logs, as compared here:

  • VPC Flow Logs: Capture IP traffic going to and from network interfaces within a specific Virtual Private Cloud (VPC), subnet, or Elastic Network Interface (ENI).

    • Provide granular visibility into traffic within a VPC and its associated components.

    • Ideal for understanding internal VPC traffic patterns, security group efficacy, and general network troubleshooting within a defined VPC boundary.

  • Transit Gateway Flow Logs: Capture IP traffic that traverses a Transit Gateway or Transit Gateway Attachment (a central hub connecting multiple VPCs and on-prem networks).

    • Offer a unified view for monitoring and troubleshooting inter-VPC and hybrid cloud traffic.

    • Ideal for complex, multi-VPC architectures.

    • No need to collect flow logs from every attached VPC individually.

Transit Gateway flow logs do not replace VPC flow logs for granular insights into traffic within individual VPCs. Kentik can ingest both types of flow logs to provide comprehensive network visibility across your AWS environment.

An excerpt from an AWS VPC flow log file

AWS Flow Log Formats

The AWS VPC and Transit Gateway flow log records have different formats. See the AWS documentation on VPC Flow Log Records and Transit Gateway Flow Log Records for details.
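As a concrete illustration, a record in the default (version 2) VPC flow log format can be parsed positionally. The field list below follows the default format; the sample record values are hypothetical:

```python
# Field names for the default (version 2) VPC flow log format.
DEFAULT_V2_FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_record(line: str, fields=DEFAULT_V2_FIELDS) -> dict:
    """Split one space-separated flow log record into a field dict."""
    values = line.split()
    if len(values) != len(fields):
        raise ValueError(f"expected {len(fields)} fields, got {len(values)}")
    return dict(zip(fields, values))

# Hypothetical sample record in the default v2 format.
sample = ("2 123456789012 eni-0a1b2c3d 10.0.0.5 10.0.1.7 443 49152 "
          "6 10 8400 1620000000 1620000060 ACCEPT OK")
record = parse_record(sample)
print(record["srcaddr"], record["action"])  # -> 10.0.0.5 ACCEPT
```

Custom formats (including the v3-v5 fields) work the same way; substitute the field list configured when the flow log was created.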

AWS Flow Log Deletion

Kentik supports two flow log deletion approaches:

  • Deletion by Kentik:

    • Kentik automatically deletes log files within 15 minutes of being posted to the AWS S3 bucket.

    • Requires AmazonS3FullAccess permissions for the associated AWS role (see Create an AWS Role).

  • Deletion by Customer:

    • If the role has AmazonS3ReadOnlyAccess, Kentik cannot delete logs.

    • You must manually delete bucket contents (see the AWS documentation on Empty a bucket).

AWS Flow Log Documentation

See the AWS documentation on VPC Flow Logs for more information.

Notes:

  • Cloud export setup can also be initiated from the Welcome Page during Kentik onboarding.

  • Each VPC sending flow logs to a bucket is represented in Kentik as a "cloud device" (see Cloud Exports and Devices).

Metadata-only Setup (AWS)

Metadata enables Kentik to display your AWS resource structure in the Kentik portal (e.g., on the Kentik Map). A metadata-only export is used to collect metadata from resources whose flow logs are stored in a bucket handled by a different cloud export.

Nested Roles for Metadata

Kentik uses IAM policies and roles for metadata-only exports as follows:

  • Separate Roles: Permissions are granted individually for each account.

  • Nested Roles: Permissions are granted to a primary account and assumed by multiple secondary accounts, allowing centralized control of metadata access.

A nested structure in which account A is primary and accounts B and C are secondary.

Create a Primary Policy

To set up a primary account policy for metadata export to Kentik:

  1. Log into the AWS account designated as the primary account for metadata export.

  2. Use the Services menu or Search field to navigate to IAM.

  3. In the IAM sidebar, select Policies.

  4. Click Create policy.

  5. Select the JSON tab.

  6. Replace the editor’s content with the JSON specified in Primary Policy JSON.

  7. Click Next and ensure the policy includes STS as an action.

  8. Provide a name and description for the new policy under Policy details.

  9. Click Create policy to save and return to the Policies page.

Primary Policy JSON

The following JSON defines a policy to enable access to the primary account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*"
    },
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Action": [
        "organizations:ListAccounts",
      ],
      "Resource": "*"
    }
  ]
}

Create a Primary Role

To assign the created policy to a role in the primary account:

  1. Go to IAM console » Roles.

  2. Click Create role.

  3. Select Custom trust policy.

  4. Replace editor content with the Primary Role JSON.

  5. Click Next. Find and select your policy.

  6. Click Next.

  7. Enter a role name and description.

  8. Click Create role to save and return to the Roles page.

Primary Role JSON

The following JSON assigns a trust policy to a role in the primary account, enabling access by Kentik's AWS account.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::834693425129:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "98837"
        },
        "ArnEquals": {
          "aws:PrincipalArn": "arn:aws:iam:834693425129:role/eks-ingest-node"
        }
      }
    }
  ]
}

Create Secondary Policies

To provision each secondary account so that the primary account can access its metadata:

  1. Log into the console for the secondary account.

  2. Go to the Policy editor page.

  3. Replace editor content with the primary account’s policy JSON.

  4. Click Next, then enter a name and description for the new policy.

  5. Click Create policy to save and return to the Policies page.

  6. Repeat for each secondary account.

Create Secondary Roles

To create a role in each secondary account and assign the policy to the role:

  1. Log into the console for the secondary account.

  2. Go to the Select trusted entity page.

  3. Choose Custom trust policy.

  4. Replace editor content with the Secondary Policy JSON, substituting primary_account_id with the primary account ID.

  5. Assign the created policy to the new role, enter a name and description, and click Create role.

  6. Repeat for each secondary account.

Secondary Policy JSON

The following JSON defines the trust policy for a role in the secondary account, enabling the role to be assumed by the primary account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::primary_account_id:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
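When provisioning many secondary accounts, the trust policy above can be rendered programmatically with the primary account ID substituted. This is a sketch; the account ID shown is a placeholder:

```python
import json

def secondary_trust_policy(primary_account_id: str) -> str:
    """Render the secondary-account trust policy, substituting the
    primary account ID into the Principal ARN."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{primary_account_id}:root"},
                "Action": "sts:AssumeRole",
                "Condition": {},
            }
        ],
    }
    return json.dumps(policy, indent=2)

# Placeholder account ID for illustration only.
print(secondary_trust_policy("111122223333"))
```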

Logging Setup (AWS)

To set up flow log publishing to an S3 bucket in AWS, follow these topics.

Create an S3 Bucket

Begin by creating an S3 bucket to store flow logs, which Kentik will access later to collect metadata and logs.

Bucket Allocation

Choose between:

  • Local buckets: Store logs in the same region as the resources. Create a bucket per region (see Exports and Devices in AWS).

  • Centralized buckets: Store logs from multiple regions in a single bucket. Work through the steps below for each centralized bucket.

    Note: When logging multiple accounts across regions, use local buckets to avoid cross-region data transfer fees.

Bucket Creation

To create an S3 bucket:  

  1. Navigate to the Amazon S3 console.

  2. Click Create Bucket and follow the prompts to set up your bucket.

  3. Enter a bucket name and select a region (see AWS Regions for Buckets).

  4. Click Create. Default settings in the Configure Options, Set Permissions, and Review tabs can remain unchanged. The new bucket will appear in the S3 console bucket list.

    Note: Refer to Bucket Naming Rules for conventions.
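A proposed bucket name can be pre-checked locally before creation. This sketch covers only the core naming rules (length, character set, IP-style names); it is a subset of the full AWS restrictions:

```python
import re

# 3-63 chars; lowercase letters, digits, hyphens, dots;
# must begin and end with a letter or digit.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name: str) -> bool:
    """Check the core S3 bucket naming rules (a subset of the AWS rules)."""
    if not BUCKET_RE.match(name):
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False  # names formatted like an IP address are rejected
    return ".." not in name  # no consecutive dots

print(looks_like_valid_bucket_name("kentik-flow-logs-us-east-1"))  # -> True
print(looks_like_valid_bucket_name("Bad_Bucket"))                  # -> False
```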

Bucket Encryption

Kentik supports the following options for encrypting AWS S3 buckets:

  • S3-SSE: This is the default AWS encryption option and does not require configuring any additional permissions (other than the defaults mentioned in this article). S3-SSE is the most common approach among Kentik customers.

  • SSE-KMS: This option uses the AWS Key Management Service (KMS) to manage encryption keys and requires these additional permissions for Kentik to access the given keys:

    • kms:Decrypt

    • kms:GenerateDataKey

This example SSE-KMS policy can be added to the default policy or configured as standalone:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "arn:aws:kms::904358996938:key/"
    }
  ]
}

Note: Kentik does not support the SSE-C encryption option for S3 buckets.

AWS Regions for Buckets

Factors to consider:

  • Latency & Costs: Choose a nearby region to optimize latency and minimize costs.

  • Regulatory Requirements: Consider compliance needs.

  • Flow Logs: Ideally, place the bucket in the same region/zone as the VPCs publishing to it.

For additional information, refer to the AWS documentation.

Configure Log Publishing

To configure VPC flow logs to an S3 bucket:

  1. Go to the AWS VPC Dashboard.

  2. Click Your VPCs and select the VPC for log publishing (see VPCs and Kentik Rate Limits).

  3. Select the VPC row and click Create flow log.

  4. Set Log Options:

    1. Filter: All (recommended for best visibility)

    2. Destination: Send to an S3 bucket

    3. Log record format: Custom format and select all AWS v2, v3, v4, and v5 fields

  5. Specify the S3 bucket ARN (e.g., arn:aws:s3:::test-logs-bucket). Optionally, create a new bucket (see Bucket Creation).

  6. Click Create. The resulting page confirms creation of the flow log and shows its AWS-assigned log ID. Click Close to return to Your VPCs.

  7. Repeat for other VPCs.

Note: For logging from an interface or subnet, see Creating a Flow Log that Publishes to Amazon S3.

VPCs and Kentik Rate Limits

Kentik treats each S3 bucket as a single "cloud export" (similar to a physical device for billing; see About Plans). AWS flow log ingest is therefore subject to Kentik's per-device rate limits, applied per bucket.

Check Log Collection

AWS flow logs are published to the designated S3 bucket every 5 minutes, so it may take several minutes for them to start appearing in the directory.

Check Log Creation

To verify if flow logs are being created and published to your S3 bucket:

  1. Go to the Amazon S3 console.

  2. Select the bucket where flow logs are exported, using the search as necessary.

  3. Check for the AWSLogs folder on the Objects tab.

    Note: Logs are only generated when there is traffic in the VPC.

Check Log Contents

To examine the contents of flow logs in the AWSLogs folder:

  1. Open the AWSLogs folder.

  2. Navigate through the folders: account number » vpcflowlogs » region » year » month » day

  3. Open a log file to see details like owner, last-modified timestamp, and size.

  4. Click Download to get a compressed (.gz) version of the file. Uncompress and open it to view the contents.
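The steps above can be sketched locally. The path-building helper assumes the standard AWSLogs/account/vpcflowlogs/region/year/month/day layout; the sample record values are hypothetical, with a locally created file standing in for a downloaded log:

```python
import gzip
import os
import tempfile
from datetime import date

def log_prefix(account_id: str, region: str, day: date) -> str:
    """S3 key prefix under which AWS delivers one day of VPC flow logs."""
    return f"AWSLogs/{account_id}/vpcflowlogs/{region}/{day:%Y/%m/%d}/"

def read_log_file(path: str) -> list:
    """Flow log files are gzip-compressed text, one record per line."""
    with gzip.open(path, "rt") as fh:
        return [line.rstrip("\n") for line in fh if line.strip()]

# Create a hypothetical sample file to stand in for a downloaded log.
sample = ("2 123456789012 eni-0a1b2c3d 10.0.0.5 10.0.1.7 443 49152 "
          "6 10 8400 1620000000 1620000060 ACCEPT OK")
tmp_path = os.path.join(tempfile.gettempdir(), "sample-flowlog.log.gz")
with gzip.open(tmp_path, "wt") as fh:
    fh.write(sample + "\n")

print(log_prefix("123456789012", "us-east-1", date(2024, 5, 1)))
# -> AWSLogs/123456789012/vpcflowlogs/us-east-1/2024/05/01/
print(read_log_file(tmp_path)[0].split()[12])  # -> ACCEPT
os.unlink(tmp_path)
```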

Create an AWS Role

To enable Kentik to export your VPC flow logs, create a new AWS role for each AWS account from which you want to export logs. AWS recommends creating a new role specifically for logging.

Required Permissions:

  • Flow logs: Grant access to the S3 bucket for log export.

  • Metadata: Grant access to the EC2 Describe APIs for VPC instances (see AWS Metadata Endpoints).

  • Metrics: Grant access to the Cloudwatch APIs (see Optional AWS Endpoints).

Create Policy for Role

To create a new AWS role, first create a policy:

  1. Log into your AWS account and navigate to IAM » Policies (see IAM Policies).

  2. Click Create Policy.

  3. Select the JSON tab and replace the existing JSON with the AWS Policy JSON below.

  4. Click Next: Tags to optionally add descriptive tags to the policy.

  5. Click Next: Review.

  6. Enter a name (e.g., “Kentik-Metadata-Policy”) and description for the policy.

  7. Click Create Policy.

AWS Policy JSON

Use the following on the JSON tab of the Create Policy page:

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:GetMetricData",
        "organizations:ListAccounts",
        "cloudwatch:Describe*",
        "directconnect:List*",
        "directconnect:describe*",
        "ec2:Describe*",
        "ec2:Search*",
        "ec2:GetManagedPrefixListEntries",
        "elasticloadbalancing:Describe*",
        "network-firewall:Describe*",
        "network-firewall:List*"
      ],
      "Resource":"*"
    },
    {
      "Effect":"Allow",
      "Action":[
        "s3:Get*",
        "s3:List*"
      ],
      "Resource":[
        "arn:aws:s3:::test-logs-bucket",
        "arn:aws:s3:::test-logs-bucket/*"
      ]
    }
  ]
}

Note: To enable Kentik to delete old flow logs (see AWS Flow Log Deletion), replace the second Action array (the S3 statement) with "Action": "s3:*".

Attach Policy to Role

To attach a policy to a new role:

  1. Go to IAM » Roles at AWS IAM Roles.

  2. Click Create Role.

  3. Select “Another AWS Account” as the trusted entity, enter “834693425129” as the Account ID, and click Next: Permissions.

  4. Find and attach the created policy using the Filter policies field.

  5. Choose a permission (see AWS Flow Log Deletion):

    1. AmazonS3FullAccess: For Kentik to delete logs

    2. AmazonS3ReadOnlyAccess: For self-managed log deletion

      Note: Undeleted log files may incur additional AWS storage charges.

  6. Attach the chosen permission to the new role.

  7. Click Next: Tags (optional).

  8. Click Next: Review, enter a role name and description.

  9. Click Create Role. The new role will appear in the roles list.

Configure the AWS Role

To configure the "trust relationship" that allows Kentik to access your resources:

  1. In AWS IAM, select the new role.

  2. Go to the Trust Relationships tab and click Edit Trust Relationship.

  3. Insert the AWS Role JSON below in the Policy Document field and click Update Trust Policy.

  4. Click Copy to Clipboard to copy the Role ARN from the Summary page (for use later in the Kentik portal).

This setup creates a role with a trust relationship to “eks-ingest-node” from AWS account 834693425129, allowing Kentik to access specified AWS services.

AWS Role JSON

This JSON specifies the trust relationship for the AWS role enabling Kentik to export flow logs:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::834693425129:role/eks-ingest-node"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
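As a sanity check, a trust policy document can be inspected locally to confirm that it grants sts:AssumeRole to Kentik's principal. This helper is a simplified sketch: it ignores Condition blocks and wildcard matching:

```python
def trust_allows(policy: dict, principal_arn: str) -> bool:
    """Return True if any Allow statement in the trust policy grants
    sts:AssumeRole to the given AWS principal ARN (conditions ignored)."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "sts:AssumeRole" not in actions:
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        if principal_arn in principals:
            return True
    return False

# The trust policy from the AWS Role JSON above, as a Python dict.
kentik_trust = {
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::834693425129:role/eks-ingest-node"},
        "Action": "sts:AssumeRole",
    }],
}
print(trust_allows(kentik_trust, "arn:aws:iam::834693425129:role/eks-ingest-node"))  # -> True
```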

Create a Kentik Cloud Export

A Kentik “cloud export” can represent the following types of data collected from AWS:

  • Metadata only: Collects metadata from resources whose flow logs are stored in a bucket managed by a different flow log export.

  • Flow logs and metadata: Includes all entities (VPCs, subnets, interfaces) publishing to a specific bucket (see Create an S3 Bucket). A “cloud device” is automatically created in Kentik for each entity.

  • Metrics: Cloud metrics history for historical telemetry analysis, trending, and alerting.

Cloud export configuration settings for AWS, GCP, Azure, and OCI with observability features.

Configuration settings for AWS Cloud Export with various observability features listed.

Initial Cloud Export Steps

To create a new AWS cloud export:

  1. Navigate to Settings » Public Clouds.

  2. Click Create Cloud Export.

  3. Click AWS Cloud.

  4. Under Observability Features, select the data types to collect:

    1. Metadata collection (Required): Automatically selected.

    2. Flow log collection:

      1. Select to collect flow logs

      2. Log Deletion: Allow Kentik to delete logs from your AWS bucket after export.

        Note: This affects the delete_after_read property in the Terraform configuration.

      3. Terraform: Choose to automatically configure the cloud export using Terraform in the next step of the wizard (see Automated Setup).

    3. Cloud metrics history: Select to collect AWS CloudWatch metrics.

  5. Click the green arrow to proceed.

The next steps depend on the export type:

Metadata-only Export

To set up a new AWS metadata-only cloud export, follow these steps:

  1. Complete the Initial Cloud Export Steps while leaving Flow log collection unselected. (Selecting Cloud metrics history is optional).

  2. Enter AWS Role (Required):

    1. AWS Role: Enter the ARN of the role created in Create a Primary Role.

    2. Organization Role: Select to collect metadata for all child accounts.

    3. Click Verify Role.

  3. Select AWS Region (Required):

    1. Choose the region of the primary account.

    2. Click Verify Region (fails if the AWS Role is blank or invalid).

  4. Specify Additional Roles: Expand the Optional: Additional Metadata Roles pane to access these options:

    1. Secondary AWS Accounts: Comma-separated list of secondary account IDs.

    2. Regions Used: Select all regions where the listed accounts exist.

    3. Role suffix: Role name appended to the ARN.

  5. Click the green arrow to proceed to the final step.

  6. Enter the cloud export name/description:

    1. Name (Required): Specify or accept the default name for the cloud export.

    2. Description: Provide a description or accept the default.

  7. Select the appropriate Kentik billing plan for the cloud export from the Billing Plan dropdown.

  8. Click Save to finalize the cloud export and return to the Public Clouds page, where the new export will be listed.
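Assuming each secondary account contains an identically named role, the role ARNs derived from the comma-separated account list and the role suffix would look like this sketch (function name and sample account IDs are hypothetical):

```python
def metadata_role_arns(secondary_accounts: str, role_suffix: str) -> list:
    """Expand a comma-separated account-ID list into role ARNs, assuming a
    role with the given suffix name exists in each secondary account."""
    return [
        f"arn:aws:iam::{acct.strip()}:role/{role_suffix}"
        for acct in secondary_accounts.split(",")
        if acct.strip()
    ]

# Placeholder account IDs and role name for illustration only.
print(metadata_role_arns("111122223333, 444455556666", "kentik-metadata-role"))
```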

Flow Logs and Metadata Export

To set up a new AWS flow logs and metadata export, follow these steps:

  1. Complete the Initial Cloud Export Steps while selecting Flow log collection.

  2. Complete the first three steps of Metadata-only Export.

  3. Provide the S3 bucket name where flow logs will be stored.

  4. Specify a prefix for Kentik to add to the S3 bucket name when creating the cloud export.

  5. Click Verify S3 Bucket to ensure the bucket is accessible and correctly configured.

  6. Click the green arrow to proceed.

  7. Specify or accept the default name for the cloud export.

  8. Optionally provide a description for the cloud export or accept the default.

  9. Choose the appropriate Kentik billing plan for the cloud export from the dropdown.

  10. Click Save to finalize the cloud export and return to the Public Clouds page, where the new export will be listed.

Automated Setup

To automatically configure your AWS setup using Terraform, follow these steps.

Options for flow log collection and AWS log management settings are displayed.

  1. Follow the steps in Create a Kentik Cloud Export and select the Help me configure my provider via Terraform box.

  2. For AWS Provider Profile Name, the default is “default”. Enter a different name if needed.

  3. Select the AWS region from the dropdown, which populates the region field in the generated configuration.

  4. Configure settings in the Select options section (see Automated Configuration Options).

  5. Copy the generated configuration and save it as main.tf in an empty directory where Terraform will be run.

  6. Execute the commands provided in the wizard to apply the configuration.

  7. Click Finish to return to the Public Clouds Page, where the new cloud export will be listed under Cloud Exports.

Configuration settings for AWS provider profile, including region and logging options.

Automated Configuration Options

When configuring Terraform in Kentik, you can customize the following options:

  • Enable flow logs:

    • For all VPCs in the selected region(s): Automatically configures flow logs for all VPCs in the selected region.

    • For selected VPCs in the selected region(s): Enter VPC IDs in the vpc_id_list parameter to configure only those VPCs.

  • Write logs to bucket:

    • Every minute (recommended): Provides a higher volume of logs at a consistent rate, ideal for traffic engineering, security, and real-time monitoring.

    • Every 10 minutes (AWS default): Reduces log volume and AWS charges.

  • Automatically create necessary role in AWS account: Decide whether to automatically create the AWS role or manage it manually according to your security protocols.

  • Use External ID: Includes a Kentik-provided ID for third-party access to an S3 bucket (see the AWS documentation on External ID).

  • Cloud Export Name Prefix: Specify a prefix to add to the Kentik cloud export name for easy identification.

  • S3 Bucket Prefix: Specify a prefix to add to the Kentik-created S3 bucket name.

  • IAM Role Prefix: Specify a prefix to add to the Kentik-created IAM role.

  • Billing Plan: Select the appropriate Kentik billing plan for the cloud export.

Notes:

  • Prefix fields help in identifying and managing your cloud exports more effectively.

  • Different values can be used for each prefix field to suit your organizational needs.

Using Your Cloud Export

Once the setup process is complete, you can view and utilize your cloud export in Kentik:

  • Cloud Exports List

    • Go to Settings » Public Clouds to see the updated list of cloud exports.

    • A new cloud export will be listed, representing the VPCs, transit gateways, subnets, or interfaces whose logs are pulled from the specified bucket.

  • Devices Column

    • Each VPC, transit gateway, subnet, or interface sending flow logs is listed as a cloud device.

    • Devices are named after their respective VPC, transit gateway, subnet, or interface.

    • These names can be used as group-by and filter values in Kentik queries using the Device Name dimension.

  • Metadata and Mapping

    • The collected metadata, such as routing tables, security groups, and ACLs, enables Kentik to automatically map and visualize the topology of your AWS resources in the Kentik Map.

The Public Clouds page lists your AWS resources as “cloud exports”, each with a service status overview, highlighted issues, and device group details.

AWS Endpoints Lists

Kentik needs permission to access selected AWS endpoints on your behalf in order to collect metadata and metrics, as detailed in the following lists.

AWS Metadata Endpoints

ec2:

  • describeAvailabilityZones

  • describeCustomerGateways

  • describeFlowLogs

  • describeInternetGateways

  • describeInstances

  • describeNatGateways

  • describeNetworkAcls

  • describeNetworkInterfaces

  • describeManagedPrefixLists

  • describePrefixLists

  • describeRouteTables

  • describeSecurityGroups

  • describeSubnets

  • describeTransitGateways

  • describeTransitGatewayAttachments

  • describeTransitGatewayVpcAttachments

  • describeTransitGatewayRouteTables

  • describeTransitGatewayConnects

  • describeTransitGatewayConnectPeers

  • describeVpcs

  • describeVpcEndpoints

  • describeVpcPeeringConnections

  • describeVpnConnections

  • describeVpnGateways

  • searchTransitGatewayRoutes

  • getManagedPrefixListEntries

directconnect:

  • describeDirectConnectGateways

  • describeVirtualInterfaces

  • describeLags

  • describeConnections

elb:

  • describeLoadBalancers

iam:

  • listAccountAliases

networkmanager (core network metadata):

  • listCoreNetworks

  • getCoreNetwork

  • getCoreNetworkPolicy

  • listAttachments

  • getNetworkRoutes

network-firewall:

  • listFirewalls

  • describeFirewall

  • listFirewallPolicies

  • describeFirewallPolicy

  • describeRuleGroup

Optional AWS Endpoints

To optionally get a list of accounts in an AWS organization, Kentik may need to access the following additional endpoints:

organizations:

  • listAccounts

cloudwatch:

  • listMetrics

  • getMetricStatistics

  • getMetricData

sts:

  • assumeRole