This guide provides instructions for integrating Kentik with Amazon Web Services (AWS).

An example private connection between a data center and an AWS region.
Notes:
See the Cloud Overview for an introduction to Kentik cloud setup.
For setup assistance, email support@kentik.com (see Customer Care).
Process Overview
Integrating AWS with Kentik allows you to ingest three key data types to gain full network observability:
Metadata: Infrastructure context (VPCs, Subnets, Security Groups) collected via AWS APIs.
Flow Logs: Traffic telemetry generated by VPCs, Transit Gateways, or Network Firewalls.
Metrics: Performance data for network-related cloud services.
Follow these phases to complete the setup:
Configure Logging: Enable flow logging for your target VPCs and Transit Gateways and direct them to a dedicated S3 bucket.
Grant Permissions: Create an IAM role and policy that allows Kentik to read from the S3 bucket and query AWS APIs for metadata.
Create Cloud Export: Configure a new "cloud export" in the Kentik portal to ingest data from AWS.
Link Resources: Input your AWS Role ARN and S3 bucket details to establish the connection.
Validation: Confirm that the cloud export status changes to “Active” (green checkmark) and data begins populating on the Public Clouds page of the Kentik portal.
Visualize: Use the Kentik Map and Data Explorer to immediately monitor traffic patterns and resource utilization, gaining insights for optimizing network performance and enhancing security monitoring.
TIP: Kentik recommends starting with a Metadata-Only cloud export to verify connectivity before configuring high-volume flow logs.
About AWS Cloud Visibility
Kentik ingests three core telemetry types to provide complete visibility into your AWS environment:
Metadata (Context):
Collected via AWS APIs, metadata is used to build topology maps (Kentik Map), enrich flow records, and power connectivity analysis (Cloud Pathfinder).
Core Infrastructure: VPCs, Subnets, Availability Zones (AZs), and ENIs.
Routing & Security: Route tables, Network ACLs, Security Groups, and Network Firewalls
Gateways: Internet Gateways, NAT Gateways, and Transit Gateways (including attachments).
Flow Logs (Traffic)
Flow logs provide the raw traffic telemetry needed for analytics, alerting, and security forensics, powering Kentik modules such as Data Explorer, Alerting, and the Kentik Map.
Collection Methods: Logs are enabled on AWS resources (VPCs, Transit Gateways, etc.) and exported to an S3 bucket for Kentik to ingest.
Metrics (Performance)
Kentik collects CloudWatch metrics to track historical performance and health trends. Supported namespaces include:
Load Balancing: AWS/ELB, AWS/ApplicationELB, AWS/NetworkELB, AWS/GatewayELB
Connectivity: AWS/VPN, AWS/DX (Direct Connect), AWS/TransitGateway, AWS/PrivateLinkEndpoints, AWS/PrivateLinkServices
Compute & Storage: AWS/EC2, AWS/S3, AWS/NATGateway
Network Services: AWS/Route53, AWS/ApiGateway, AWS/NetworkFirewall, AWS/NetworkManager
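If you want to preview which metrics Kentik's CloudWatch permissions would expose in a given namespace, you can check with your own credentials. Below is a minimal boto3 sketch (Python); the region and the AWS/TransitGateway namespace are placeholders, and this check is not part of the Kentik setup itself.

import boto3

# Placeholder region and namespace; substitute any namespace from the list above.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# List the metrics available in this namespace.
response = cloudwatch.list_metrics(Namespace="AWS/TransitGateway")
for metric in response["Metrics"][:10]:
    print(metric["MetricName"], metric["Dimensions"])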
Notes: Metadata and metrics collection can be configured independently of flow logs. For optimal cost and performance, many customers consolidate flow logs from multiple VPCs into centralized S3 buckets (e.g., one per region).
AWS Flow Log Overview
AWS Flow Logs capture information about IP traffic going to and from network interfaces in your AWS environment. Similar to NetFlow or sFlow in physical networks (see About Flow), these logs provide the raw telemetry needed to visualize traffic paths, audit security, and troubleshoot connectivity issues.
VPC vs. Transit Gateway Flow Logs
Kentik can ingest flow logs from both individual VPCs and Transit Gateways. Use the table below to determine which logging strategy best fits your visibility needs.
| Feature | VPC Flow Logs | Transit Gateway (TGW) Logs |
|---|---|---|
| Scope | Granular: Captures IP traffic for specific ENIs, Subnets, or entire VPCs. | Aggregated: Captures all IP traffic traversing the central Transit Gateway hub. |
| Visibility Level | Detailed view of internal VPC traffic (East-West) and security group efficacy. | High-level view of Inter-VPC and Hybrid (VPN/Direct Connect) traffic. |
| Best For | Deep-dive troubleshooting, security forensics, and monitoring specific workloads. | Simplified monitoring for complex, multi-VPC architectures without configuring every VPC individually. |
| Limitations | Requires configuration on every VPC you want to monitor. | Does not show internal traffic within a VPC (traffic that doesn't leave the VPC). |
Note: Transit Gateway logs do not replace VPC flow logs. For comprehensive observability, Kentik recommends enabling both: use TGW logs for the "big picture" of your network and VPC logs for granular analysis of critical resources.

An excerpt from an AWS VPC flow log file.
Flow Log Management
Log Record Formats
While AWS supports both default and custom flow log formats, Kentik recommends using a Custom Format to ensure comprehensive visibility.
Recommended Configuration: Select Custom format and enable all available fields (v2, v3, v4, and v5).
Why? This ensures Kentik receives not just basic traffic data (5-tuple), but also critical context like TCP flags, packet drop reasons, and region/zone information.
References: VPC Flow Log Records | Transit Gateway Flow Log Records
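For reference, a custom format that enables the v2–v5 fields looks roughly like the following. This is an illustrative sketch assembled from AWS's published flow log field names (verify the full list against the current AWS documentation); the constant name is arbitrary.

# Illustrative only: assembled from AWS-documented v2-v5 flow log fields.
CUSTOM_LOG_FORMAT = (
    "${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} "
    "${srcport} ${dstport} ${protocol} ${packets} ${bytes} "
    "${start} ${end} ${action} ${log-status} "          # v2 (default) fields
    "${vpc-id} ${subnet-id} ${instance-id} ${tcp-flags} "
    "${type} ${pkt-srcaddr} ${pkt-dstaddr} "             # v3 fields
    "${region} ${az-id} "                                # v4 fields
    "${flow-direction} ${traffic-path} "
    "${pkt-src-aws-service} ${pkt-dst-aws-service}"      # v5 fields
)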
Flow Log Deletion
To manage S3 storage costs, you must decide whether Kentik or your internal team manages log retention.
| Approach | How it works | Requirements |
|---|---|---|
| Deletion by Kentik | Kentik deletes log files 15 minutes after ingestion. | Role Permission: full S3 access to the bucket (e.g., AmazonS3FullAccess or "s3:*"). |
| Deletion by Customer | You manage the lifecycle policies in S3 to delete old logs. | Role Permission: read-only S3 access to the bucket (e.g., AmazonS3ReadOnlyAccess). |
WARNING: If you choose "Deletion by Customer" and fail to set up an S3 lifecycle policy, your bucket size and AWS costs will grow indefinitely.
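If you choose "Deletion by Customer," an S3 lifecycle rule handles retention for you. The boto3 sketch below expires flow log objects after seven days; the bucket name, prefix, and retention period are placeholders, not Kentik requirements.

import boto3

s3 = boto3.client("s3")

# Expire flow log objects after 7 days (placeholder values throughout).
s3.put_bucket_lifecycle_configuration(
    Bucket="kentik-flow-logs-prod",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-flow-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "AWSLogs/"},   # flow logs land under this prefix
                "Expiration": {"Days": 7},
            }
        ]
    },
)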
AWS Documentation Reference
For deeper details on AWS logging mechanics, refer to:
Notes:
Cloud export setup can also be initiated from the Welcome Page during Kentik onboarding.
Each VPC sending flow logs to a bucket is represented in Kentik as a "cloud device” (see Cloud Exports and Devices).
Metadata-Only Setup (AWS)
Metadata collection allows Kentik to discover your AWS infrastructure (VPCs, Subnets, Gateways) and to visualize your network topology on the Kentik Map.
When to use a Metadata-Only Export:
Topology Only: You want to see your network inventory and hierarchy without ingesting traffic logs.
Split Architecture: Your flow logs are centralized in a "Log Archive" account, but you need to gather metadata from the various "Member" accounts where the resources actually live.
IAM Architecture: Standard vs. Nested
To authorize Kentik to fetch this metadata, you must configure IAM roles. Choose the strategy that matches your AWS environment.

A nested structure in which account A is primary and accounts B and C are secondary.
| Strategy | Description | Best For |
|---|---|---|
| Standard (Separate Roles) | You create a unique IAM role in every AWS account you want to monitor. Kentik connects to each account directly. | Single accounts or small environments with few VPCs. |
| Nested (Hub-and-Spoke) | You create one Primary Role in a central account and Secondary Roles in member accounts. The Primary Role "assumes" the Secondary Roles to gather data. | Large AWS Organizations, Control Tower setups, or Managed Service Providers (MSPs). |
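To make the hub-and-spoke model concrete, the boto3 sketch below shows the two-hop role chain that occurs on Kentik's side: assume the Primary Role, then use those credentials to assume a Secondary Role in a member account. It is an illustration only; the account IDs, role names, and session names are placeholders, and you do not need to run this yourself.

import boto3

sts = boto3.client("sts")

# Hop 1: assume the Primary Role in the central (primary) account.
primary = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/kentik-primary-metadata-role",
    RoleSessionName="kentik-metadata",
    ExternalId="<your Kentik company ID>",
)["Credentials"]

# Hop 2: with the primary credentials, assume the Secondary Role in a member account.
sts_primary = boto3.client(
    "sts",
    aws_access_key_id=primary["AccessKeyId"],
    aws_secret_access_key=primary["SecretAccessKey"],
    aws_session_token=primary["SessionToken"],
)
secondary = sts_primary.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/kentik-secondary-metadata-role",
    RoleSessionName="kentik-metadata-member",
)["Credentials"]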
Create a Primary Policy
To set up a primary AWS account policy for metadata export to Kentik:
Log into the AWS account designated as the primary for metadata export.
Navigate to IAM » Policies and click Create policy.
Select the JSON tab.
Replace the editor’s content with the JSON specified in Primary Policy JSON.
Click Next and ensure the policy includes STS as an action.
Provide a name and description for the new policy under Policy details.
Click Create policy to save and exit.
Primary Policy JSON
The following JSON defines a policy to enable access to the primary AWS account:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AssumeSecondaryRoles",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "*"
},
{
"Sid": "OrgListing",
"Effect": "Allow",
"Action": [
"organizations:ListAccounts"
],
"Resource": "*"
}
]
}
Create a Primary Role
To assign the created policy to a role in the primary AWS account:
Navigate to IAM » Roles and click Create role.
Select Custom trust policy.
Replace editor content with the Primary Role JSON.
CRITICAL: You must replace <your_Company_ID> with your specific Kentik company ID (found in the portal under Settings » Licenses).
Click Next. Find and select your policy and click Next.
Enter a role name and description.
Click Create role to save and return to the Roles page.
Primary Role JSON
The following JSON assigns a trust policy to a role in the primary account, enabling access by Kentik's AWS account. In the sts:ExternalId field, use your Kentik company ID which is the "Account #" on the portal's Licenses page (Settings » Licenses).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "KentikTrust",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::834693425129:root"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "<your Company or External ID here>"
},
"ArnEquals": {
"aws:PrincipalArn": "arn:aws:iam::834693425129:role/eks-ingest-node"
}
}
}
]
}
IMPORTANT:
If you prefer to use a 16-digit randomized string as your ExternalId instead of your Kentik Company ID, email Kentik support at support@kentik.com.
For more on ExternalId, see Automated Configuration Options.
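The sts:ExternalId condition in the trust policy above means the role can only be assumed when the caller supplies a matching ExternalId. The boto3 sketch below illustrates that handshake for understanding only; the role ARN is a placeholder, and Kentik performs this call on its side.

import boto3

sts = boto3.client("sts")

# The ExternalId must match the sts:ExternalId value in the role's trust policy.
sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/kentik-primary-metadata-role",  # placeholder
    RoleSessionName="kentik-metadata",
    ExternalId="<your Kentik company ID>",
)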
Create Secondary Policies
To provision each secondary AWS account so that the primary account can access its metadata:
Log into the AWS console for the secondary account.
Go to the Policy editor page.
Replace editor content with the Secondary Policy JSON.
Click Next, then enter a name and description for the new policy.
Click Create policy to save and return to the Policies page.
Repeat for each secondary account.
Create Secondary Roles
To create a role in each secondary AWS account and assign the policy to the role:
Log into the AWS console for the secondary account.
Go to the Select trusted entity page.
Choose Custom trust policy.
Replace editor content with the Secondary Role JSON.
CRITICAL: Replace primary_account_id with the account ID of the primary account.
Assign the created policy to the new role, enter a name and description, and click Create role.
Repeat for each secondary account.
Secondary Policy JSON
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudwatch:ListMetrics",
"cloudwatch:GetMetricStatistics",
"cloudwatch:GetMetricData",
"organizations:ListAccounts",
"cloudwatch:Describe*",
"directconnect:List*",
"directconnect:describe*",
"ec2:Describe*",
"ec2:Search*",
"ec2:GetManagedPrefixListEntries",
"elasticloadbalancing:Describe*",
"iam:ListAccountAliases",
"network-firewall:Describe*",
"network-firewall:List*",
"networkManager:ListCoreNetworks"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::YOUR_S3_BUCKET_NAME",
"arn:aws:s3:::YOUR_S3_BUCKET_NAME/*"
]
}
]
}
Secondary Role JSON
The following JSON assigns a policy to a role in the secondary AWS account, enabling access by the primary account:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::primary_account_id:root"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
Logging Setup (AWS)
After configuring IAM permissions, the next step is to set up flow log publishing from your VPCs to an S3 bucket.
Notes:
For metadata-only export setup, see Metadata-Only Setup (AWS).
The setup can be applied to VPCs, subnets, or interfaces. Refer to Creating a Flow Log that Publishes to Amazon S3 for specific details on creating flow logs for different resources.
S3 Bucket Strategy
Before creating a bucket, decide on your storage strategy. This choice impacts both management overhead and AWS data transfer costs.
| Strategy | Description | Best For |
|---|---|---|
| Local Buckets | Create a separate S3 bucket in each region where you have resources (e.g., logs-us-east-1, logs-eu-west-1). | Cost & Latency: Eliminates cross-region data transfer fees. Compliance: Keeps data resident in specific geographic regions (e.g., GDPR). |
| Centralized Bucket | Send logs from all regions to a single, central S3 bucket. | Simplicity: Easier to manage if you have low traffic volume or need a central archive. |
Create an S3 Bucket
Begin by creating an S3 bucket to store flow logs, which Kentik will access later to collect metadata and logs.
To create an S3 bucket:
Navigate to the Amazon S3 console and click Create Bucket.
Enter a descriptive name (e.g., kentik-flow-logs-prod) and select your target region (see S3 Bucket Strategy).
Scroll to "Default encryption".
Recommended: Select SSE-S3 (Server-side encryption with Amazon S3 managed keys). This requires no extra configuration.
Advanced: If you must use SSE-KMS, you will need to add specific kms:Decrypt and kms:GenerateDataKey permissions to your IAM policy (see Advanced: S3 Bucket Encryption (KMS)).
Leave all other settings as default and click Create. The new bucket will appear in the S3 console bucket list.
Note: Refer to Bucket Naming Rules for conventions.
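If you prefer to script bucket creation rather than use the console, the boto3 sketch below creates a bucket and sets SSE-S3 default encryption. The bucket name and regions are placeholders; adjust them to match your S3 Bucket Strategy.

import boto3

# Placeholder bucket name and region.
s3 = boto3.client("s3", region_name="us-east-1")
s3.create_bucket(Bucket="kentik-flow-logs-prod")
# Outside us-east-1, AWS requires a location constraint, e.g.:
# s3.create_bucket(Bucket="kentik-flow-logs-prod-eu",
#                  CreateBucketConfiguration={"LocationConstraint": "eu-west-1"})

# Set SSE-S3 (AES256) default encryption, matching the recommendation above.
s3.put_bucket_encryption(
    Bucket="kentik-flow-logs-prod",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)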
Advanced: S3 Bucket Encryption (KMS)
If your S3 bucket uses SSE-KMS (AWS Key Management Service) for encryption, you must grant the Kentik IAM role permission to use your specific key. Without this, Kentik will be able to "see" the files but will be "Access Denied" when trying to read/decrypt them.
Add the following block to the Statement array of the IAM policy attached to the role Kentik assumes:
{
"Sid": "KMSAccess",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:region:account_id:key/key_id"
}
Notes:
Replace the Resource ARN with the actual ARN of your KMS key. You can find this in the AWS Console under KMS > Customer managed keys.
Kentik does not support the SSE-C encryption option for S3 buckets.
Enable VPC Flow Logs
To start sending traffic telemetry to Kentik, you must enable flow logs on your target resources:

Go to the AWS VPC Dashboard.
Click Your VPCs, select the desired VPC(s), and choose Actions > Create flow log (see VPCs and Kentik Rate Limits).
Configure Settings:
Filter: Select All (recommended for best visibility)
Destination: Select Send to an S3 bucket
S3 bucket ARN: Paste the ARN of the bucket you created in the previous steps (e.g., arn:aws:s3:::kentik-flow-logs).
Log Record Format (Critical):
Select Custom format
Check the box for “Select all” (or manually select all AWS v2, v3, v4, and v5 fields).
Why? Default logs miss critical context like TCP flags and packet size, which limits Kentik's ability to visualize traffic types.
Click Create flow log. The resulting page confirms creation of the flow log and displays its AWS-assigned log ID. Click Close to return to Your VPCs.
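The same flow log can also be created programmatically. Below is a minimal boto3 sketch assuming placeholder VPC and bucket identifiers; the log format is abbreviated, and in practice you should enable every v2–v5 field as described above.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],           # placeholder VPC ID
    TrafficType="ALL",                               # Filter: All
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::kentik-flow-logs",  # the bucket created earlier
    # Abbreviated custom format; enable all v2-v5 fields in practice.
    LogFormat=(
        "${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} "
        "${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} "
        "${action} ${log-status} ${vpc-id} ${subnet-id} ${tcp-flags} "
        "${flow-direction} ${region} ${az-id}"
    ),
    MaxAggregationInterval=60,                       # write logs every minute
)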
Important: Rate Limits
Kentik treats each S3 bucket as a single "Device." If you have hundreds of VPCs, we recommend consolidating them into regional buckets (e.g., logs-us-east-1, logs-eu-west-1) rather than one global bucket. This prevents AWS S3 throttling and ensures smooth ingestion.

Verification: Check Log Status
AWS flow logs are published every 5–10 minutes. To confirm the setup:
Wait: Allow 10 minutes for the first logs to generate.
Inspect S3: Go to your S3 bucket and look for the folder path: AWSLogs / <Account_ID> / vpcflowlogs / ...
Check Files: If you see .log.gz files appearing, data is successfully flowing to the bucket.
Note: Logs are only generated if there is active traffic in the VPC.
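To script the same check, the boto3 sketch below lists the first few objects under the flow log prefix; the bucket name and account ID are placeholders.

import boto3

s3 = boto3.client("s3")

# Placeholder bucket name and AWS account ID.
response = s3.list_objects_v2(
    Bucket="kentik-flow-logs-prod",
    Prefix="AWSLogs/123456789012/vpcflowlogs/",
    MaxKeys=5,
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])   # expect .log.gz objects once traffic is flowing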
Create an AWS Role
To enable Kentik to export your VPC flow logs, create a new AWS role for each AWS account from which you want to export logs. AWS recommends creating a new role specifically for logging.
Note: If you are performing a Metadata-Only or Nested Account setup, follow the Metadata-Only Setup (AWS) instructions instead.
Required Permissions:
Flow logs: Grant access to the S3 bucket for log export.
Metadata: Grant access to the EC2 Describe APIs for VPC instances (see AWS Metadata Endpoints).
Metrics: Grant access to the CloudWatch APIs (see Optional AWS Endpoints).
Create Policy for Role
To create a new AWS role, first create a policy:
Log into your AWS account and navigate to IAM » Policies (see IAM Policies).
Click Create Policy.
Select the JSON tab and replace the existing JSON with the AWS Policy JSON below.
Click Next: Tags to optionally add descriptive tags to the policy.
Click Next: Review.
Enter a name (e.g., “Kentik-Metadata-Policy”) and description for the policy.
Click Create Policy.
AWS Policy JSON
Use the following on the JSON tab of the Create Policy page:
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"cloudwatch:ListMetrics",
"cloudwatch:GetMetricStatistics",
"cloudwatch:GetMetricData",
"organizations:ListAccounts",
"cloudwatch:Describe*",
"directconnect:List*",
"directconnect:describe*",
"ec2:Describe*",
"ec2:Search*",
"ec2:GetManagedPrefixListEntries",
"elasticloadbalancing:Describe*",
"iam:ListAccountAliases",
"network-firewall:Describe*",
"network-firewall:List*",
"networkManager:ListCoreNetworks"
],
"Resource":"*"
},
{
"Effect":"Allow",
"Action":[
"s3:Get*",
"s3:List*"
],
"Resource":[
"arn:aws:s3:::test-logs-bucket",
"arn:aws:s3:::test-logs-bucket/*"
]
}
]
}
Note: To enable Kentik to delete old flow logs (see AWS Flow Log Deletion), replace the 2nd Action array with "Action": "s3:*".
Attach Policy to Role
To attach a policy to a new role:
Go to IAM » Roles at AWS IAM Roles.
Click Create Role.
Select “Another AWS Account” as the trusted entity, enter “834693425129” as the Account ID, and click Next: Permissions.
Find and attach the created policy using the Filter policies field.
Choose a permission (see AWS Flow Log Deletion):
AmazonS3FullAccess: For Kentik to delete logs
AmazonS3ReadOnlyAccess: For self-managed log deletion
Note: Undeleted log files may incur additional AWS storage charges.
Attach the chosen permission to the new role.
Click Next: Tags (optional).
Click Next: Review, enter a role name and description.
Click Create Role. The new role will appear in the roles list.
Configure the AWS Role
To configure the "trust relationship" that allows Kentik to access your resources:
In AWS IAM, select the new role.
Go to the Trust Relationships tab and click Edit Trust Relationship.
Insert the AWS Role JSON below in the Policy Document field and click Update Trust Policy.
Click Copy to Clipboard to copy the Role ARN from the Summary page (for use later in the Kentik portal).
This setup creates a role with a trust relationship to “eks-ingest-node” from AWS account 834693425129, allowing Kentik to access specified AWS services.
AWS Role JSON
This JSON specifies the trust relationship for the AWS role enabling Kentik to export flow logs:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::834693425129:role/eks-ingest-node"
},
"Action": "sts:AssumeRole"
}
]
}
Create a Kentik Cloud Export
A Kentik “cloud export” represents various types of data from AWS:
Metadata only: Collects metadata from resources whose flow logs are stored in a bucket managed by a different flow log export.
Flow logs and metadata: Includes all entities (VPCs, subnets, interfaces) publishing to a specific bucket (see Create an S3 Bucket). A “cloud device” is automatically created in Kentik for each entity.
Metrics: Cloud metrics history for historical telemetry analysis, trending, and alerting.
Configuration settings for AWS Cloud Export with various observability features listed.
Initial Cloud Export Steps
To create a new AWS cloud export:
Navigate to Settings » Public Clouds and click Create Cloud Export.
Click AWS Cloud.
Under Observability Features, select the data types to collect:
Metadata collection (Required): Automatically selected.
Flow log collection:
Select to collect flow logs
Log Deletion: Allow Kentik to delete logs from your AWS bucket after export.
Note: This affects the delete_after_read property in the Terraform configuration.
Terraform: Choose to automatically configure the cloud export using Terraform in the next step of the wizard (see Automated Setup).
Cloud metrics history: Select to collect AWS CloudWatch metrics.
Click the green arrow to proceed.
The next steps depend on the export type:
Metadata-only Export
To set up a new AWS metadata-only cloud export, follow these steps:
Complete the Initial Cloud Export Steps while leaving Flow log collection unselected. (Selecting Cloud metrics history is optional).
Enter AWS Role (Required):
AWS Role: Enter the ARN of the role created in Create a Primary Role.
Organization Role: Select to collect metadata for all child accounts.
Click Verify Role.
Select AWS Region (Required):
Choose the region of the primary account.
Click Verify Region (fails if the AWS Role is blank or invalid).
Specify Additional Roles: Expand the Optional: Additional Metadata Roles pane to access these options:
Secondary AWS Accounts: Comma-separated list of secondary account IDs.
Regions Used: Select all regions where the listed accounts exist.
Role suffix: Role name appended to the ARN.
Click the green arrow to proceed to the final step.
Enter the cloud export name/description:
Name (Required): Specify or accept the default name for the cloud export.
Description: Provide a description or accept the default.
Select the appropriate Kentik billing plan for the cloud export from the Billing Plan dropdown.
Click Save to finalize the cloud export and return to the Public Clouds page, where the new export will be listed.
Flow Logs and Metadata Export
To set up a new AWS flow logs and metadata export, follow these steps:
Complete the Initial Cloud Export Steps while selecting Flow log collection.
Complete the first three steps of Metadata-Only Export.
Provide the S3 bucket name where flow logs will be stored.
Specify a prefix for Kentik to add to the S3 bucket name when creating the cloud export.
Click Verify S3 Bucket to ensure the bucket is accessible and correctly configured.
Click the green arrow to proceed.
Specify or accept the default name for the cloud export.
Optionally provide a description for the cloud export or accept the default.
Choose the appropriate Kentik billing plan for the cloud export from the dropdown.
Click Save to finalize the cloud export and return to the Public Clouds page, where the new export will be listed.
Automated Setup
To automatically configure your AWS setup using Terraform, follow these steps.

Follow the steps in Create a Kentik Cloud Export and select the Help me configure my provider via Terraform box.
For AWS Provider Profile Name, the default is “default”. Enter a different name if needed.
Select the AWS region from the dropdown, which populates the region field in the generated configuration.
Configure settings in the Select options section (see Automated Configuration Options).
Copy the generated configuration and save it as main.tf in an empty directory where Terraform will be run.
Execute the commands provided in the wizard to apply the configuration.
Click Finish to return to the Public Clouds Page, where the new cloud export will be listed under Cloud Exports.

Automated Configuration Options
When configuring AWS setup automatically via Terraform in Kentik, you can customize the following options:
Enable flow logs:
For all VPCs in the selected region(s): Automatically configures flow logs for all VPCs in the selected region.
For selected VPCs in the selected region(s): Enter VPC IDs in the vpc_id_list parameter to configure only those VPCs.
Write logs to bucket:
Every minute (recommended): Provides a higher volume of logs at a consistent rate, ideal for traffic engineering, security, and real-time monitoring.
Every 10 minutes (AWS default): Reduces log volume and AWS charges.
Automatically create necessary role in AWS account: Decide whether to automatically create the AWS role or manage it manually according to your security protocols.
Use External ID: Optionally, include an AWS external ID for Kentik to use to access your S3 bucket (see AWS doc on External ID):
This ID is known only to you and Kentik. Per AWS, its primary purpose is to avoid the confused deputy problem.
By default, your Kentik company ID is used.
This ID should also be used when creating the AWS role (see Primary Role JSON).
Note: If you prefer to use a 16-digit randomized string as your External ID, contact Kentik support at support@kentik.com.
Cloud Export Name Prefix: Specify a prefix to add to the Kentik cloud export name for easy identification.
S3 Bucket Prefix: Specify a prefix to add to the Kentik-created S3 bucket name.
IAM Role Prefix: Specify a prefix to add to the Kentik-created IAM role.
Billing Plan: Select the appropriate Kentik billing plan for the cloud export.
Notes:
Prefix fields help in identifying and managing your cloud exports more effectively.
Different values can be used for each prefix field to suit your organizational needs.
Using Your Cloud Export
Once the setup process is complete, you can view and utilize your cloud export in Kentik:
Cloud Exports List
Go to Settings » Public Clouds to see the updated list of cloud exports.
A new cloud export will be listed, representing the VPCs, transit gateways, subnets, or interfaces whose logs are pulled from the specified bucket.
Devices Column
Each VPC, transit gateway, subnet, or interface sending flow logs is listed as a cloud device.
Devices are named after their respective VPC, transit gateway, subnet, or interface.
These names can be used as group-by and filter values in Kentik queries using the Device Name dimension.
Metadata and Mapping
The collected metadata, such as routing tables, security groups, and ACLs, enables Kentik to automatically map and visualize the topology of your AWS resources in the Kentik Map.
The Public Clouds page lists your AWS resources as “cloud exports”, each with a service status overview, highlighted issues, and device group details.
AWS Endpoints Lists
Kentik needs permission to access selected AWS endpoints on your behalf in order to collect metadata and metrics, as detailed in the following lists.
AWS Metadata Endpoints
ec2:
describeAvailabilityZones, describeCustomerGateways, describeFlowLogs, describeInternetGateways, describeInstances, describeNatGateways, describeNetworkAcls, describeNetworkInterfaces, describeManagedPrefixLists, describePrefixLists, describeRouteTables, describeSecurityGroups, describeSubnets, describeTransitGateways, describeTransitGatewayAttachments, describeTransitGatewayVpcAttachments, describeTransitGatewayRouteTables, describeTransitGatewayConnects, describeTransitGatewayConnectPeers, describeVpcs, describeVpcEndpoints, describeVpcPeeringConnections, describeVpnConnections, describeVpnGateways, searchTransitGatewayRoutes, GetManagedPrefixListEntries
directconnect:
describeDirectConnectGateways, describeVirtualInterfaces, describeLags, describeConnections
elb:
describeLoadBalancers
iam:
ListAccountAliases
Network Manager (core network metadata):
listCoreNetworks, getCoreNetwork, getCoreNetworkPolicy, listAttachments, getNetworkRoutes
network-firewall:
listFirewalls, describeFirewall, listFirewallPolicies, describeFirewallPolicy, describeRuleGroup
Optional AWS Endpoints
To optionally get a list of accounts in an AWS organization, Kentik may need to access the following additional endpoints:
organizations:
listAccounts
cloudwatch:
cloudwatch:ListMetrics, cloudwatch:GetMetricStatistics, cloudwatch:GetMetricData
sts:
sts:AssumeRole

