Kentik for AWS

As cloud-only and hybrid cloud services become increasingly commonplace, network operators of all kinds need a unified environment within which to view and analyze the data generated by their network activities. Kentik® collects, derives, and correlates this network traffic data — flow records, BGP, GeoIP, SNMP, etc. — to enable visualization, monitoring, alerting, and analytics. The data may be collected not only from routers (including related hardware like switches) and hosts (via a software host agent) but also from your resources that are hosted by cloud service providers such as Amazon Web Services.

In this article we'll look at how to get cloud traffic and topology data (flow logs and metadata) from AWS to Kentik. The topics below will guide you through the setup process.

Notes:
- This article outlines the manual setup procedure. Kentik also supports automated setup with Terraform; see AWS Automated Setup.
- For help with any aspect of the setup workflow outlined below, please contact Kentik Customer Support.

 

About AWS Cloud Visibility

The basics of data collection from AWS are covered in the following topics:

 

AWS Resource Information Types

A cloud export to Kentik enables us to access two types of information from your AWS resources (e.g. VPCs or network interfaces):

  • Flow logs: Used for traffic analytics in Kentik portal modules such as Network Explorer, Insights, and Alerting. Flow logs are collected by enabling log collection on a resource and directing those logs to an S3 bucket, from which they are ingested by Kentik (see AWS Flow Log Overview).
  • Metadata: Used for topology views on the Kentik Map, including regions, availability zones, VPC IDs, AWS routing tables, security groups, etc. Metadata is collected by Kentik using AWS APIs.

The gathering of metadata may be independent from the collection of flow logs. It's not unusual for Kentik customers to concentrate flow logs into a few S3 buckets, e.g. one bucket in each AWS region for all of the resources of a given account. In such cases you'll use two distinct types of cloud exports (see Create a Kentik Cloud Export) to capture a full picture of your AWS resources:

  • Full export: For accounts that include an S3 bucket from which Kentik is exporting flow logs, the export type will be "Flow logs and metadata."
  • Metadata-only export: For accounts whose flow logs are being collected from a bucket in a different account, the export type will be "metadata-only."
 

AWS Flow Log Overview

In the world of Amazon Web Services, flow logs are analogous to the flow records (e.g. NetFlow, sFlow, etc.; see About Flow) generated by devices on physical networks. A flow log consists of a set of records about the flows that either originated or ended in a given Virtual Private Cloud, with each individual record made up of a set of fields giving information about a single flow.

Amazon allows you to set up a VPC Flow Log for a VPC, a subnet, or an elastic network interface (ENI), and to publish that flow log to a destination “bucket” in Amazon Simple Storage Service (S3), which is the location from which a Kentik service pulls the logs for ingest into Kentik.

AWS flow logs are ingested from an S3 bucket into Kentik via a "cloud export" that is configured on the Kentik portal's Monitor your AWS Cloud page (see AWS Cloud Setup) and managed via the Public Clouds Page (Settings » Public Clouds). By default, each VPC sending flow logs to a given bucket will be represented in Kentik as a "cloud device." For more information, see Cloud Exports and Devices.

 

AWS Flow Log Formats

AWS supports the export of flow logs in two distinct formats:

  • Default format (AWS standard format version 2): Each line of the log is a space-separated string with fields in the following order (a sample record is shown after this list):
    <version> <account-id> <interface-id> <srcaddr> <dstaddr> <srcport> <dstport> <protocol> <packets> <bytes> <start> <end> <action> <log-status>
  • Custom format (AWS log format versions 3 through 5): Each line is made up of one or more fields in a custom-specified order. The available fields, and the VPC flow logs version in which each field was introduced, are listed in the AWS documentation topic Available Fields.
    Note: Custom format allows Kentik to collect all of the v3-v5 format fields needed to determine gateway types (from logged fields) and match up routing tables (from metadata), thereby enabling our workflows for mapping and other features.
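For reference, a single record in the default v2 format looks like this (the values are illustrative, in the field order shown above):

2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK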

Custom AWS flow logs must meet the following requirements for ingest into Kentik:

  • The following fields are required: <srcaddr>, <dstaddr>, <srcport>, <dstport>, <packets>, <bytes>, <protocol>, <version>, and <start>.
  • In addition to the required fields, a custom log format must include at least six other AWS fields.
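As an illustration, the following custom format specification (in AWS's ${field} syntax) would satisfy both requirements, since it includes the nine required fields plus twelve others:

${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status} ${vpc-id} ${subnet-id} ${instance-id} ${tcp-flags} ${type} ${pkt-srcaddr} ${pkt-dstaddr}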
 

AWS Flow Log Deletion

Flow log deletion minimizes the costs associated with log data retention in the cloud. Kentik is designed to support the following approaches to flow log deletion:

  • Deletion by Kentik: Kentik's built-in log ingest process deletes log files within 15 minutes of their being posted to an AWS S3 bucket. To use this approach, you must give Kentik full access to the bucket (AmazonS3FullAccess) when setting permissions for the AWS role associated with the bucket. This setting is made in the Filter policies field on the Create Role page; see Create an AWS Role.
  • Deletion by customer: If your role permissions for a given log bucket are set to AmazonS3ReadOnlyAccess then Kentik will not be able to delete log files automatically. AWS provides a number of options for deleting the contents of a bucket, including log files; see the AWS documentation at Empty a bucket.
 

AWS Flow Log Documentation

For detailed information about VPC Flow Logs, please refer to the AWS documentation.

 

AWS Logging Setup Overview

Kentik accesses your flow logs by pulling them from a bucket in Amazon Simple Storage Service (S3). Assuming that you already have a VPC in AWS, the following setup workflow (detailed in the topics below) will enable Kentik to access the logs for ingest:

  1. In AWS’s S3 console, create a bucket to which logs can be published (see Create an S3 Bucket).
    Note: You may create a separate bucket for each region from which you will collect VPC flow logs (typically the most cost-effective approach), a combined bucket for all regions, or any combination in between (see Bucket Allocation).
  2. In AWS’s VPC Dashboard, configure each VPC (or subnet or interface) to publish logs to the bucket (see Configure Log Publishing).
  3. Back in the S3 console, confirm that logs are being published to the bucket (see Check Log Collection).
  4. In AWS’s Identity and Access Management (IAM) console, create a new AWS role (see Create an AWS Role) and configure it with permissions that enable AWS services associated with Kentik’s account to access resources associated with your account.
  5. To collect metadata for accounts whose flow logs are being collected in a different account, log into the IAM console for each account involved to create the policies and roles that enable metadata collection via the primary account and its secondary accounts (see Metadata-only Setup Tasks).
  6. In the Kentik portal, create a new cloud export (see Create a Kentik Cloud Export).

Note: As noted earlier, AWS allows you to set up a VPC Flow Log for a VPC, a subnet, or a network interface. Both the list above and the topics below describe these tasks using a VPC as the example. Individual steps in these tasks may vary slightly if you are instead enabling logging from a subnet or interface. For details, please refer to the AWS documentation topic Creating a Flow Log that Publishes to Amazon S3.

Cloud Exports in the Portal

Successful completion of the flow and metadata tasks listed in the overview above will have the following effect in the Kentik portal (see Cloud Exports and Devices):

  • A new cloud export will be shown as an added row in the Kentik portal’s Cloud Exports list (Settings » Public Clouds). The cloud export will represent the collection of VPCs (or subnets or interfaces) whose logs are pulled from the bucket specified during setup of the cloud export.
  • The Devices column in the Cloud Exports list will show a cloud device for each VPC, subnet, or interface sending flow logs to the bucket:
    - Each device will be named after one VPC, subnet, or interface.
    - Each flow record ingested into KDE from a given cloud device will include the device’s name, which will be the value that you can group-by and filter on using the Device dimension.
  • The routing table, security group, and ACL information collected as metadata will enable Kentik to automatically configure mapping, e.g. in the Kentik Map, to visualize the topology of your AWS resources.
 

AWS Logging Setup Tasks

The tasks required to set up the publishing of flow logs to an S3 bucket are covered in the following topics:

Note: For setup related to metadata, see Metadata-only Setup Tasks.

 

Create an S3 Bucket

We'll start the logging setup process by establishing a container into which your flow logs can be collected and accessed by Kentik. That means creating, in your account, a “bucket” in Amazon Simple Storage Service, commonly referred to as “S3.” In a later stage of the process (see Configure the AWS Role) we'll enable Kentik's AWS account to access your account, so that Kentik can collect metadata about your AWS resources as well as the flow logs from the bucket.

Bucket Allocation

Flow logs may be exported from AWS to Kentik using either of the following approaches to allocating buckets (or a combination of the two):

  • Local buckets: Send the logs from resources in a given region to an S3 bucket in the same region. You'll work through the steps below once for each local bucket, and set that bucket as the flow log destination for all of the resources that you want represented in Kentik as a single cloud export (see Exports and Devices in AWS).
  • Centralized buckets: Send logs from resources in multiple regions to one or more S3 buckets that need not be in the same regions as those resources. You'll work through the steps below once to create each centralized bucket. You can then use that bucket as the flow log destination for AWS resources that may be in multiple regions.
    Note: AWS charges for the transfer of flow logs from resources in one region to a bucket in another region.

Customers with multiple accounts in multiple regions typically create a local bucket for each account in each region, providing centralization within each region but avoiding extra AWS fees for transporting flow logs between regions.
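For example, an organization with two AWS accounts that are each active in three regions would create six buckets in total (one per account per region), keeping every flow log transfer within its own region.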

Bucket Creation

To create an S3 bucket:

  1. Navigate to the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Click the Create Bucket button, which opens the Create Bucket dialog.
  3. Enter a name for the new bucket in the Bucket Name field. You will need this name later when configuring one or more VPCs to send logs to this bucket (see Configure Log Publishing).
    Note: For bucket naming conventions, see the AWS document Bucket Naming Rules.

  4. For Region, choose the region in which to locate the bucket (see Bucket Allocation), which is the region from which Kentik will access the collected VPC flow logs (see AWS Regions for Buckets).
  5. Click the Create button.
    Note: The default settings on the Configure Options, Set Permissions, and Review tabs of the dialog can be left as-is.
  6. Back on the Amazon S3 console you’ll see your new bucket in your... bucket list.
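If you prefer to script bucket creation, the following sketch shows the equivalent call using the AWS SDK for Python (the bucket name and region are placeholders for your own values):

import boto3

# Placeholder values: substitute your own bucket name and region.
s3 = boto3.client("s3", region_name="us-east-2")
s3.create_bucket(
    Bucket="test-logs-bucket",
    # Omit CreateBucketConfiguration when creating a bucket in us-east-1.
    CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
)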

AWS Regions for Buckets

In AWS, S3 buckets are located within regions. The following factors may influence your choice of bucket locations:

  • You may be able to optimize latency, minimize costs, or address regulatory requirements by choosing an AWS Region that is geographically close to you.
  • All else being equal, the best location for a bucket that will collect flow logs is likely to be the same region as the VPCs that will be publishing to it.

Additional information about regions and availability zones is available in the AWS documentation.

 

Configure Log Publishing

Now that we have an S3 bucket to which we can send flow logs, we need to configure the VPC from which we want to publish those logs. To do this we’ll use AWS’s VPC Dashboard to tell the VPC to create flow logs, and we’ll set the destination of those logs to the S3 bucket that we just created.

To publish logs to a destination bucket using the VPC Dashboard:

  1. Navigate to the VPC Dashboard at https://console.aws.amazon.com/vpc/.
  2. In the sidebar at left, click on Your VPCs to go to the Your VPCs page.
  3. In the list of VPCs, find the row for the VPC from which you’d like to send flow logs to the bucket.
    Note: Select only one VPC (see VPCs and Kentik Rate Limits).
  4. Click the button at the left of the row. A new pane, which includes a tab listing existing flow logs, will appear at the bottom of the page.
  5. Click the Create flow log button at the upper right of the pane.
  6. In the resulting Create Flow Log dialog:
    - Set Filter to All (recommended for best visibility).
    - Set Destination to Send to an S3 bucket.
    - Set Log record format to Custom format, which opens a set of Log format controls.
    - Click the Select all button, which will result in the collection of AWS v2, v3, v4, and v5 fields.
  7. Set the S3 bucket ARN field to the bucket where you'd like to collect flow logs from this VPC:
    - To use the bucket you created earlier (or any other existing bucket), enter a string built from “arn:aws:s3:::” plus the name of the bucket, e.g.
    arn:aws:s3:::test-logs-bucket.
    - To use a new bucket, click Create S3 Bucket and follow the steps in Bucket Creation above.
  8. Click the Create button. The resulting Create flow log page will confirm creation of the log and state the ID assigned to the log by AWS. Click the Close button to go back to the Your VPCs page, where the new log will now be listed at the bottom of the page.
  9. To publish logs from additional VPCs to this bucket, repeat steps 3 through 8 above.

Note: To log from an interface or subnet instead of a VPC, see Creating a Flow Log that Publishes to Amazon S3.
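If you'd rather script this configuration than use the VPC Dashboard, the following AWS SDK for Python sketch is roughly equivalent to the steps above (the VPC ID and bucket name are placeholders, and the LogFormat shown is an abbreviated custom format; the Select all button in step 6 produces the full v2-v5 field list):

import boto3

# Abbreviated custom format: the nine required fields plus several others.
CUSTOM_FORMAT = (
    "${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} "
    "${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} "
    "${action} ${log-status} ${vpc-id} ${subnet-id} ${instance-id} "
    "${tcp-flags} ${type} ${pkt-srcaddr} ${pkt-dstaddr}"
)

ec2 = boto3.client("ec2", region_name="us-east-2")
response = ec2.create_flow_logs(
    ResourceType="VPC",                     # or "Subnet" / "NetworkInterface"
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    TrafficType="ALL",                      # corresponds to Filter = All
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::test-logs-bucket",
    LogFormat=CUSTOM_FORMAT,                # omit to use the default v2 format
)
print(response["FlowLogIds"])               # the ID(s) assigned by AWS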

VPCs and Kentik Rate Limits

When logs are pulled from an S3 bucket for ingest, Kentik treats the flow records from each bucket as being from one "cloud export." Each such export is analogous, for the purpose of Kentik billing plans (see About Plans), to one physical device. As a result, for AWS flow logs, the per-device rate limits in your plan are applied per bucket.

 

Check Log Collection

Once a flow log is created (see Configure Log Publishing), AWS publishes the flow logs to the designated S3 bucket. Logs are collected from the VPC and published roughly every five minutes, so it may take several minutes for them to start appearing in the bucket.
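The logs land in the bucket under a predictable key structure; an individual log object's key will look something like the following (account ID, region, flow log ID, timestamp, and hash are illustrative):

AWSLogs/123456789012/vpcflowlogs/us-east-2/2023/01/15/123456789012_vpcflowlogs_us-east-2_fl-0123456789abcdef0_20230115T0000Z_abcd1234.log.gz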

Check Log Creation

To check if any flow logs are actually being created and published to your S3 bucket:

  1. Navigate to the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. In the list of buckets, click on the bucket to which you’ve exported flow logs (if needed, use the search field to find the bucket by name).
  3. On the Objects tab of the resulting bucket page you’ll see one or more folders. The destination log folder for the bucket will be called AWSLogs.
    Note: Flow logs are only created when there is traffic on the VPC. If there is no destination log folder it may be because there’s no traffic in your VPC.

Check Log Contents

If an AWSLogs folder exists, you can drill down into its contents to check whether the logs include flows from a given date or to see the contents of an individual log file:

  1. Click to open the AWSLogs folder. The list will now include a folder whose name is your AWS account number.
  2. Click on the folder named after your account number. The list will now show a folder named vpcflowlogs.
  3. Click on the vpcflowlogs folder. The list will now show a folder whose name corresponds to the code (e.g. us-east-2) for the region (e.g. “US East (Ohio)”) in which the VPC exists.
  4. Click on the folder named for the region. A set of one or more folders will appear that are each named after a year (e.g. “2023”).
  5. Click on a folder for a year, and continue clicking on folders to drill down through months and days until the list contains individual log files.
  6. Click on a file in the list to open the page corresponding to that file. The page will display information about the log file, including owner, last-modified timestamp, and size.
  7. To look at the file contents, click the Download button at upper left, which downloads a compressed (.gz) version of the file. Then uncompress the file and open it.
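The same check can be scripted. Here's a minimal AWS SDK for Python sketch that downloads and decompresses one log file (the bucket name and object key are placeholders; use a key found by drilling down as described above):

import gzip

import boto3

s3 = boto3.client("s3")
# Placeholder bucket and key: substitute values from your own bucket.
bucket = "test-logs-bucket"
key = (
    "AWSLogs/123456789012/vpcflowlogs/us-east-2/2023/01/15/"
    "123456789012_vpcflowlogs_us-east-2_fl-0123456789abcdef0_"
    "20230115T0000Z_abcd1234.log.gz"
)
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
for line in gzip.decompress(body).decode("utf-8").splitlines():
    print(line)  # the first line is a header naming the log fields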
 

Create an AWS Role

Now we have a bucket for collecting logs, and we've set a VPC to send logs to that bucket. Next you'll need to enable Kentik to export your VPC flow logs by giving permission to our services to access the needed resources in your account. You’ll do this by creating a new “role” in AWS for each AWS account from which you wish to export the logs. You'll then assign to each of those roles a set of permissions that grant access by Kentik to the following:

  • Flow logs: The S3 bucket from which Kentik will export the logs.
  • Metadata: The EC2 Describe APIs for the VPC instances from which the logs will be exported.

Note: AWS recommends creating a new role for logging rather than using an existing role.

Create Policy for Role

To create a new AWS role, you'll first need to create a policy:

  1. Log into the console of your AWS account and go to the Identity and Access Management (IAM) page at https://console.aws.amazon.com/iamv2/home#/policies.
  2. Click the Create Policy button.
  3. Click the JSON tab on this page.
  4. In the resulting JSON editor, overwrite the existing JSON by pasting in the JSON shown below in AWS Policy JSON.
  5. Click the Next: Tags button at bottom right. In the subsequent step, you may choose to add tags to describe this policy.
  6. Click the Next: Review button at bottom right.
  7. Supply a name and description for the newly created policy, for example:
    - Name: Kentik-Metadata-Policy
    - Description: Policy allowing Kentik Technologies permissions to read and list all resources in the CloudWatch, Direct Connect, EC2, ELB and Network Firewall services.
  8. Click the Create Policy button at bottom right.

AWS Policy JSON

The following JSON is used on the JSON tab of the Create Policy page:

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:GetMetricData",
        "organizations:ListAccounts",
        "cloudwatch:Describe*",
        "directconnect:List*",
        "directconnect:describe*",
        "ec2:Describe*",
        "ec2:Search*",
        "elasticloadbalancing:Describe*",
        "network-firewall:Describe*",
        "network-firewall:List*"
      ],
      "Resource":"*"
    },
    {
      "Effect":"Allow",
      "Action":[
        "s3:Get*",
        "s3:List*"
      ],
      "Resource":[
        "arn:aws:s3:::test-logs-bucket",
        "arn:aws:s3:::test-logs-bucket/*"
      ]
    }
  ]
}

To enable Kentik to delete old flow logs (see AWS Flow Log Deletion), replace the Action array in the second statement above with the following:
"Action": "s3:*"

Attach Policy to Role

Once you've created a policy, you can attach it to a new role:

  1. Log into the console of your AWS account and go to the Identity and Access Management (IAM) Roles page at https://console.aws.amazon.com/iamv2/home#/roles.
  2. On the Roles page, click the Create Role button.
  3. On the first tab of the Create Role page:
    - Select “Another AWS Account” as the type of trusted entity.
    - Enter “834693425129” as the Account ID.
    - Click the Next: Permissions button at bottom right.
  4. Use the Filter policies field to find the policy that was just created, then check the checkbox to attach it to the new role.
  5. Decide which of the following permissions you'd like to use (see AWS Flow Log Deletion):
    - AmazonS3FullAccess if you want Kentik to delete the log files (this assumes that you used the modified Action JSON in AWS Policy JSON above).
    - AmazonS3ReadOnlyAccess if you want your own organization to manage the deletion of log files.
    Note: Undeleted log files may lead to additional data storage charges from AWS.
  6. Clear the Filter policies field, then use it to find the permission you chose in the previous step. Check the checkbox to attach the permission to the new role.
  7. Click the Next: Tags button at the bottom right.
  8. Click the Next: Review button at bottom right:
    - In the Role name field, enter a name for the new role.
    - In the Role description field, enter a brief description for the new role.
  9. Next, click the Create Role button at bottom right. You'll be taken back to the main Roles page. Your new role should appear at the bottom of the list of roles (if the list is long you can filter for the new role by entering its name in the filter field).
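Once the role exists you can confirm its ARN and attached policies programmatically. Here's a minimal AWS SDK for Python sketch, assuming the example role name "Flow_Logs_Test" used in Configure the AWS Role below:

import boto3

iam = boto3.client("iam")
role_name = "Flow_Logs_Test"  # example role name from this article
print(iam.get_role(RoleName=role_name)["Role"]["Arn"])
for policy in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
    print(policy["PolicyName"])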
 

Configure the AWS Role

Once you've created a new role, you'll need to configure the "trust relationship" that allows Kentik services to access resources owned by your account:

  1. On the main Roles page in the IAM section of the AWS console, click the new role in the list.
  2. On the resulting Summary page for the role, click the Trust Relationships tab, then click the Edit Trust Relationship button to open the Edit Trust Relationship page.
  3. Paste the Trust Relationships JSON (below) into the Policy Document field, then click the Update Trust Policy button.
  4. Back on the role's Summary page, click the Copy to Clipboard icon at the right of the Role ARN field (first line of summary). Save the copied role ARN (Amazon resource name), which you'll need when you finish the log import workflow in the Kentik portal.

We’ve now created a new role (e.g. “Flow_Logs_Test”) and established a trust relationship between that role and the role “eks-ingest-node” in AWS account 834693425129 (Kentik). This relationship gives the Kentik role permission to use a specified set of AWS services (AmazonEC2ReadOnlyAccess, plus either AmazonS3FullAccess or AmazonS3ReadOnlyAccess) on specific AWS resources, in this case the S3 bucket that we created and assigned to the new role.

Trust Relationships JSON

The following JSON defines the trust relationship for the AWS role that allows Kentik to export flow logs:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::834693425129:role/eks-ingest-node"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}


 

Metadata-only Setup Tasks

The tasks required to set up the collection of metadata about AWS resources are covered in the following topics:

 

Metadata Export Overview

Metadata allows Kentik to accurately display the inner structure of your AWS resources on the Kentik portal, e.g. in the Kentik Map. In the workflow above (AWS Logging Setup Tasks) we configured an account to enable Kentik to access both metadata and flow logs. As explained in AWS Resource Information Types, however, there are situations in which the flow logs for a given resource are being collected in a bucket elsewhere. To still capture the resource's metadata for topology mapping, we'll need to set up a metadata-only export.

 

Nested Roles for Metadata

As with any export from AWS, a metadata-only export depends on permissions granted to Kentik via Identity and Access Management (IAM) policies and roles. These permissions may be structured in either of the following ways:

  • Separate roles: Permissions for each account are granted individually.
  • Nested roles: Permissions are granted to a single primary account and assumed by multiple secondary accounts.

Nested roles have the advantage of enabling you to control access to the metadata from all of your AWS resources (i.e. all of your AWS accounts) through a single primary account. The metadata-only procedures below describe metadata collection using nested roles.

A nested structure in which account A is primary and accounts B and C are secondary.
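To make the nesting concrete, here is a sketch, expressed with the AWS SDK for Python, of the two-hop role assumption that this structure enables (the account IDs and role name are hypothetical):

import boto3

# Hop 1: Kentik's account assumes the role in the primary account (A).
sts = boto3.client("sts")
primary_creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/KentikMetadataRole",  # hypothetical
    RoleSessionName="kentik-metadata",
)["Credentials"]

# Hop 2: with the primary account's temporary credentials, assume the
# role in a secondary account (B or C) to read its metadata.
sts_primary = boto3.client(
    "sts",
    aws_access_key_id=primary_creds["AccessKeyId"],
    aws_secret_access_key=primary_creds["SecretAccessKey"],
    aws_session_token=primary_creds["SessionToken"],
)
secondary_creds = sts_primary.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/KentikMetadataRole",  # hypothetical
    RoleSessionName="kentik-metadata",
)["Credentials"]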
 

Metadata-only Setup Process

As explained in Nested Roles for Metadata, the export of metadata from your AWS resources is a service — enabled by AWS APIs — for which you grant Kentik access via policies and roles. You'll start by creating a policy and role for the primary account that will be accessed directly by Kentik's own AWS account, after which you'll create policies and roles for each of the secondary accounts from which metadata will be exported via the primary account.

 

Create a Primary Policy

The primary account is the account from which Kentik will export metadata that originates in one or more secondary accounts (see Nested Roles for Metadata). To prepare a role for the primary account, you'll first create a policy that grants that role permission to assume roles in the secondary accounts:

  1. Log into the AWS account that you'll use as the primary account for metadata export to Kentik.
  2. In the main navbar of the AWS console, use the Services menu or the Search field to go to IAM (Identity and Access Management).
  3. In the left sidebar choose Policies.
  4. On the Policies page, click the Create policy button at upper right.
  5. On the Create policy page, click the JSON button toward the upper right.
  6. On the Policy editor page, replace the content of the editor with the JSON in Primary Policy JSON.
  7. Click the Next button at the lower right. On the Review and Create page, the list of permissions defined in the policy should include STS, which is the Action from the JSON in the previous step.
  8. Under Policy details, enter a name and description for the new policy.
  9. Click the Create policy button, which saves the new policy and returns you to the Policies page.

Primary Policy JSON

The JSON below defines a policy that allows the role in the primary account to assume roles (sts:AssumeRole), which is how the primary account accesses metadata in the secondary accounts.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*"
    }
  ]
}


 

Create a Primary Role

Now that we have a policy we'll assign it to a role in the primary account:

  1. In the left sidebar of the IAM console, choose Roles.
  2. On the Roles page, click the Create role button at upper right.
  3. On the Select trusted entity page, choose Custom trust policy.
  4. In the policy editor, replace the content of the editor with the JSON in Primary Role JSON, which enables Kentik's account (834693425129) to access the metadata in your primary account.
  5. Click the Next button at the lower right. On the Add permissions page, you'll see a Permissions policies list. Use the Search field to find the policy that you created in Create a Primary Policy.
  6. Click the checkbox at the left of the policy's row, then click the Next button.
  7. On the Name, review, and create page, enter a name and description for the new role.
  8. Click the Create role button at the lower right, which saves the new role and returns you to the Roles page.

Primary Role JSON

The JSON below defines the trust policy for the role in the primary account, enabling access by Kentik's AWS account.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::834693425129:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

 

Create Secondary Policies

So far we’ve provisioned our primary account so that Kentik can access its metadata via the AWS API. Next we'll provision secondary accounts to enable access to their metadata by the primary account. As with the primary account, we'll first create a policy and then create a role to which the policy can be assigned.

To create a policy for one secondary account:

  1. Log into the console for the secondary account.
  2. Navigate to the Policy editor page as described in Create a Primary Policy.
  3. On the Policy editor page, replace the content of the editor with the same policy JSON that you used for the primary account.
  4. Click the Next button to go to the Review and Create page, where you'll enter a name and description for the new policy.
  5. Click the Create policy button, which saves the new policy and returns you to the Policies page.
  6. Repeat the above steps for each secondary account whose metadata will be accessed from the primary account.
 

Create Secondary Roles

Now that we've created a policy for one or more secondary accounts we can create a role in each of those same accounts and — as we did for the primary account — assign the policy to the role.

To create a role for one secondary account:

  1. Log into the console for the secondary account.
  2. Navigate to the Select trusted entity page as described in Create a Primary Role.
  3. Choose Custom trust policy.
  4. In the policy editor, replace the content of the editor with the JSON in Secondary Policy JSON, replacing primary_account_id with the ID of the primary account. This will enable the primary account to access metadata in the secondary account.
  5. As described in Create a Primary Role, assign the policy from Create Secondary Policies to the new role, enter a name and description for the role, and save it by clicking the Create role button.
  6. Repeat the above steps for each secondary account whose metadata will be accessed from the primary account.

Secondary Policy JSON

The JSON below defines the trust policy for a role in the secondary account, enabling access by the primary account.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::primary_account_id:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}


 

AWS Console Recap

So far we've covered two main areas of setup in the AWS console:

  • Flow logs: In AWS Logging Setup Tasks, we configured an export for an account from which we want to collect both flow logs and metadata: we established an S3 bucket in which the account's flow logs for a given region can be collected, we set one or more VPCs to publish logs to the bucket, we checked that the logs are actually being published, and we enabled Kentik access to the bucket.
  • Metadata: In Metadata-only Setup Tasks, we covered configuration for accounts that require a metadata-only export because their flow logs are being exported via a bucket in a different account. Using the optional "nested" approach (see Nested Roles for Metadata), we set a primary account from which Kentik can collect metadata directly, and we set permissions in one or more secondary accounts that enable the sharing of their information (routing tables, security groups, etc.) with the primary account.

Assuming that all has gone well, we're now done with setup in AWS. To complete the setup process we’ll move on to the Kentik portal.

 

Create a Kentik Cloud Export

Cloud exports are covered in the following topics:

 

Cloud Export Overview

The last stage of our workflow is to create a "cloud export" in Kentik. Depending on its type, the export may represent one of the following:

  • Flow logs and metadata: Represents all of the entities — VPCs, subnets, and interfaces — publishing to the bucket created in Create an S3 Bucket; a “cloud device” is automatically created in Kentik for each individual entity.
  • Metadata only: Enables collection of metadata from resources whose flow logs are being collected in a bucket covered by a different flow log export.
 

Configure a Cloud Export

The creation of a new cloud export begins with getting to the Monitor your AWS Cloud page in the v4 portal:

  1. In the main navbar menu, click Settings at far left.
  2. At the top of the resulting Settings page, click on Public Clouds in the card at top right.
  3. On the resulting Public Clouds page, click the Add AWS Cloud button at top, which takes you to the Monitor your AWS Cloud page.

The next steps depend on which type of export you're configuring.

 

Metadata-only Export Fields

The following fields of the Manual Configuration tab are required to specify a metadata-only cloud export:

  • AWS Role: The ARN of the role that you created in Create a Primary Role, to which you attached the policy that grants Kentik access to your primary account.
  • AWS Region: The region in which the primary account exists.
  • Optional: Additional Metadata Roles:
    - AWS Accounts: A comma-delimited list of 12-digit account IDs, one for each of the secondary accounts from which the primary account has been granted permission to get metadata.
    - Regions used: A drop-down from which you can select all of the regions in which the accounts listed in AWS Accounts exist.
    - Role suffix: The role name appended to the ARN specified in AWS Role.
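For example, assuming the standard ARN pattern, if AWS Accounts contains 111111111111 and 222222222222 and Role suffix is KentikMetadataRole, metadata would be collected via arn:aws:iam::111111111111:role/KentikMetadataRole and arn:aws:iam::222222222222:role/KentikMetadataRole (the account IDs and role name here are hypothetical).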
 

Using Your Cloud Export

At this point we’ve completed the setup process. On the Settings » Public Clouds page, you should now be able to see the changes to the Cloud Exports list that are described in Cloud Exports in the Portal. As time passes and flow records from the VPC are ingested into Kentik you’ll be able to use the names of your cloud devices as group-by and/or filter values for the Device Name dimension in Kentik queries.
