

KDE Overview

This article covers the following topics related to querying the Kentik Data Engine (KDE), a datastore developed by Kentik to store current and historical flow data:

- For information about the KDE main tables, including the columns from which Kentik Detect derives filters and group-by dimensions, see KDE Tables.
- For information about querying Kentik Data Engine, see Querying KDE.



About Kentik Data Engine

Kentik Detect utilizes a custom-built, scalable columnar datastore called Kentik Data Engine. KDE ingests the flow data received from customers (e.g. NetFlow or sFlow), correlates it with additional data such as GeoIP and BGP, and stores it (redundantly) in flow records that can be queried from the Kentik Detect portal, via Kentik APIs (see Query API), or via an ANSI SQL-compliant PostgreSQL interface.
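The ingest-time correlation described above can be sketched as a simple enrichment step. This is a minimal illustration only: the field names, lookup structures, and sample values below are assumptions for the sketch, not the actual KDE schema or pipeline.

```python
# Sketch of an ingest enrichment step in the style described above:
# a raw flow record is correlated with GeoIP and BGP lookup data
# before being stored. All field names here are illustrative.

def enrich_flow(raw_flow, geoip, bgp_routes):
    """Return a copy of the flow record augmented with GeoIP and BGP data."""
    record = dict(raw_flow)  # copy; never mutate the raw input
    record["src_country"] = geoip.get(raw_flow["src_ip"], "??")
    record["dst_country"] = geoip.get(raw_flow["dst_ip"], "??")
    # Real routers do a longest-prefix match; simplified here to an
    # exact-prefix lookup for brevity.
    record["dst_as_path"] = bgp_routes.get(raw_flow["dst_prefix"], [])
    return record

geoip = {"10.0.0.1": "US", "192.0.2.9": "DE"}
bgp_routes = {"192.0.2.0/24": [64500, 64501, 64502]}
flow = {"src_ip": "10.0.0.1", "dst_ip": "192.0.2.9",
        "dst_prefix": "192.0.2.0/24", "bytes": 1500}

enriched = enrich_flow(flow, geoip, bgp_routes)
print(enriched["src_country"], enriched["dst_as_path"])
```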

KDE keeps a discrete set of databases for the flow records of each customer. These databases are made up of “Main Tables” that hold the flow records and associated data received from each device (router or host), as well as supplemental data derived from the flow data. The columns of those Main Tables contain the data that is queried by Kentik Detect. For a categorized list of queryable columns, see Main Table Schema.



Subqueries, Slices, and Shards

Note: This topic discusses Kentik's implementation of SQL as it relates to Kentik Detect's querying of the Kentik Data Engine and also to user querying of KDE via the Query SQL Method of the V5 Query API. Direct user querying of KDE using SQL (e.g. with a psql client) is deprecated. For additional information please contact Kentik support.

Kentik Data Engine presents to the user as a traditional (non-clustered) SQL database while using a scalable distributed cluster architecture under the hood. This architecture is built around key concepts that include subqueries, slices, shards, masters, and workers.

As a given customer’s flow records and associated data are ingested into KDE, the data from each device is stored in two parallel “main tables” (see KDE Tables), one for a full-resolution dataseries and another for a “fast” dataseries that is optimized for faster execution of queries covering long timespans (see Resolution Overview).

KDE main tables are each divided into logical “slices” of one minute (for Full dataseries) or one hour (for Fast dataseries). Every slice from a given device’s main table corresponds to at least two identical “shards,” each of which is an on-disk physical block in KDE. For high availability, the shards for a given slice are always stored on different machines (typically in different racks). The machines that store the shards are referred to as “workers.”

The KDE subquery process is handled by both workers and “masters.” For a given worker to handle a given subquery, a shard corresponding to the time-slice covered by that subquery must reside on that worker. Masters are responsible for splitting a query into subqueries that each correspond to a slice, for identifying (via metadata lookup) which workers have access to the shards needed for a given subquery, and for assigning each subquery to a worker. Each worker generates a result set from the data in a shard and returns it to the issuing master, which reassembles the partial results into a final result set that is returned to the top-level PostgreSQL interface as SQL/MED foreign data. This approach is part of what enables Kentik Data Engine to achieve exceptional performance through a SQL interface even when querying massive amounts of data.
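The split/scatter/merge flow described above can be sketched as follows. This is a deliberately simplified model: real masters use metadata lookups, redundant shard replicas, and parallel dispatch, and the worker/shard names here are invented for illustration.

```python
# Simplified sketch of the master/worker flow: a master splits a
# query's timespan into one-minute slices (Full dataseries), assigns
# each slice's subquery to a worker holding a shard for that slice,
# and merges the per-slice partial results.

SLICE_SECONDS = 60  # Full-dataseries slices are one minute

def split_into_slices(start, end):
    """Return the start time of each one-minute slice in [start, end)."""
    return list(range(start, end, SLICE_SECONDS))

def assign_worker(slice_start, shard_map):
    """Pick a worker holding a shard for this slice (first of >=2 replicas)."""
    return shard_map[slice_start][0]

def run_query(start, end, shard_map, worker_data):
    """Master: scatter per-slice subqueries, then merge partial sums."""
    total = 0
    for slice_start in split_into_slices(start, end):
        worker = assign_worker(slice_start, shard_map)
        # Worker computes a partial result from its local shard.
        total += sum(worker_data[worker].get(slice_start, []))
    return total

# Two slices; each slice's shard is replicated on two workers.
shard_map = {0: ["w1", "w2"], 60: ["w2", "w3"]}
worker_data = {
    "w1": {0: [100, 50]},
    "w2": {0: [100, 50], 60: [25]},
    "w3": {60: [25]},
}
print(run_query(0, 120, shard_map, worker_data))  # 175
```

The key design point the sketch mirrors is that a subquery can only go to a worker that physically holds a shard for that slice, and replication across machines gives the master more than one valid choice.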

Because queries to Kentik Data Engine are broken into subqueries, some special functions are required to interact properly with the underlying architecture, and these functions can affect the way that queries must be written.

Note: Kentik Data Engine does not currently support use of NOT IN in a WHERE clause.



KDE Resolution

At ingest, Kentik creates two fully independent dataseries that are stored in the KDE, one at full resolution and another optimized for faster execution of queries that cover long timespans:

  • Full dataseries: Includes every flow record sent by a given customer to Kentik (within applicable limits of the governing service agreement).
  • Fast dataseries: Includes only a subset of the flow records, enabling faster response to queries spanning 24 hours or more.

The use of parallel dataseries makes it feasible to run long-timespan queries that return in seconds without compromising the detail of shorter timespan queries. For further information about how the Fast dataseries is derived, how the query timespan determines the default dataseries for a query, and how to override the default, see Resolution Overview.
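The default choice between the two dataseries can be sketched as a simple function of query timespan. The 24-hour cutoff below is an assumption for illustration (taken from the "24 hours or more" guidance above); see Resolution Overview for the actual rules and how to override the default.

```python
# Sketch of default-dataseries selection by query timespan.
# The 24-hour threshold is an assumed illustrative cutoff, not an
# official KDE constant; an explicit override always wins.

FAST_THRESHOLD_HOURS = 24  # assumption for this sketch

def default_dataseries(timespan_hours, override=None):
    """Return 'Full' or 'Fast' for a query, honoring an explicit override."""
    if override in ("Full", "Fast"):
        return override
    return "Fast" if timespan_hours >= FAST_THRESHOLD_HOURS else "Full"

print(default_dataseries(1))           # Full
print(default_dataseries(72))          # Fast
print(default_dataseries(72, "Full"))  # Full (override)
```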



Table Time-slicing

Each Main Table and All Devices table in KDE is subdivided into time-slices that allow data to be spread across KDE for distributed processing of subqueries (see Subqueries, Slices, and Shards):

  • For tables in the Full dataseries (see KDE Resolution), ingestion of flow data into a main table results in one row per flow record, and the duration of the time-slice is 1, 5, or 10 minutes, depending on the width of the query (see Time Rounding).
  • For Fast dataseries tables, the time-slice is one hour (for information about how Fast data is derived, see About Dataseries Resolution).
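The time-slice durations in the bullets above can be sketched as a small lookup. The query-width thresholds used for the Full dataseries are assumptions for illustration only; see Time Rounding for the actual boundaries.

```python
# Sketch of KDE time-slice duration: Fast-dataseries tables use
# one-hour slices; Full-dataseries queries use 1-, 5-, or 10-minute
# slices depending on query width. The width thresholds below are
# illustrative assumptions (see Time Rounding for the real rules).

def slice_minutes(dataseries, query_width_hours=0):
    """Return the time-slice duration in minutes for a query."""
    if dataseries == "Fast":
        return 60  # one-hour slices
    # Full dataseries: assumed illustrative width thresholds.
    if query_width_hours <= 3:
        return 1
    if query_width_hours <= 24:
        return 5
    return 10

print(slice_minutes("Fast"))      # 60
print(slice_minutes("Full", 1))   # 1
print(slice_minutes("Full", 12))  # 5
print(slice_minutes("Full", 48))  # 10
```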



Populating BGP Fields

When a flow is ingested into a device’s KDE main table, the way the flow record’s BGP fields are populated depends on the following three conditions at the time of ingest:

  1. Is the device enabled for peering with Kentik Detect?
  2. Does the BGP routing table obtained by peering with the device include an IP route for the received flow?
  3. Is the AS path (list of ASNs) in the IP route empty or non-empty?

The following table shows how the conditions above determine the values written to the BGP-related fields in a given device’s main Data Engine table:

| Conditions | State A | State B | State C | State D |
|---|---|---|---|---|
| 1. Peering | Enabled | Enabled | Enabled | Not enabled |
| 2. IP route for flow | Yes | Yes | No | N.A. |
| 3. AS path | Non-empty string | Empty string | N.A. | N.A. |
| **Resulting field values** | | | | |
| AS path | The BGP AS path in the route. | The ASN of the peering device. | Empty string | Empty string |
| Last ASN in path | The last ASN in the BGP AS path. | The ASN of the peering device. | The ASN of the corresponding (dst or src) IP address. | The ASN of the corresponding (dst or src) IP address. |
| Last-ASN name | The AS name corresponding to the last ASN in the BGP AS path. | The AS name corresponding to the ASN of the peering device. | The AS name corresponding to the (dst or src) ASN. | The AS name corresponding to the (dst or src) ASN. |
| First ASN in path | The first ASN in the BGP AS path. | The ASN of the peering device. | 0 | 0 |
| First-ASN name | The name corresponding to the first ASN in the BGP AS path. | The AS name corresponding to the ASN of the peering device. | Empty string | Empty string |
| 2nd-position ASN | The ASN in the second position of the BGP AS path. | 0 | 0 | 0 |
| 2nd-position name | The name corresponding to the ASN in the second position of the BGP AS path. | Empty string | Empty string | Empty string |
| 3rd-position ASN | The ASN in the third position of the BGP AS path. | 0 | 0 | 0 |
| 3rd-position name | The name corresponding to the ASN in the third position of the BGP AS path. | Empty string | Empty string | Empty string |
| BGP community | The BGP community of the route. | The BGP community of the route. | Empty string | Empty string |
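The mapping from the three ingest-time conditions to the four states can be expressed as a short decision function. The state labels follow the table above; the function itself is a sketch, not Kentik's implementation.

```python
# Sketch of the state decision in the table above: the three
# ingest-time conditions map to one of four states (A-D) that
# determine how the BGP fields are populated.

def bgp_state(peering_enabled, has_route, as_path):
    """Return the BGP-population state ('A'-'D') for a flow at ingest."""
    if not peering_enabled:
        return "D"  # device not peering with Kentik Detect
    if not has_route:
        return "C"  # peering, but no IP route for the flow
    # Peering and route present: state depends on the AS path.
    return "A" if as_path else "B"

print(bgp_state(True, True, [64500, 64501]))  # A
print(bgp_state(True, True, []))              # B
print(bgp_state(True, False, []))             # C
print(bgp_state(False, False, []))            # D
```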



KDE Access via PostgreSQL

Note: This feature is available only to organizations with SQL enabled. For additional information please contact Kentik support.

In addition to accessing your KDE-stored data via the Kentik Detect portal or the Kentik APIs (see Query API), you can connect to KDE via any ANSI SQL-compliant PostgreSQL client. To do so you’ll need the information shown below, which is unique to each organization but the same for all users within the organization. The information is available from the Customer Information pane of your User Profile Page (click the username at the right of the Kentik Detect navbar):

  • KDE host:port: The domain name of the Kentik Detect host used for your organization, and the Kentik-assigned port through which users registered to your organization access the KDE datastore.
  • KDE user: A Kentik-assigned unique identifier for your organization.

Using the above information, build a command line with the following pattern:

psql -h {host} -p {port} -U {user} -W

When you execute the command, a prompt will request a password, which is actually the name of your organization’s database in KDE. You can find the value to enter in the Customer Information pane of your User Profile Page. The value is the string following (but not including) the -W in the field labeled “PSQL command line.”
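Assembling the command from the Customer Information values can be sketched as below. The host, port, and user values are placeholders, not real Kentik endpoints; remember that the password `-W` prompts for is your organization's KDE database name, not the "KDE password" field.

```python
# Sketch: building the psql invocation from the Customer Information
# values. Placeholder values only; the -W flag makes psql prompt for
# a password, and per the text above that prompt expects the
# organization's KDE database name.

def psql_command(host, port, user):
    """Return the psql argument list following the documented pattern."""
    return ["psql", "-h", host, "-p", str(port), "-U", user, "-W"]

cmd = psql_command("kde.example.kentik.com", 5433, "org_12345")
print(" ".join(cmd))  # psql -h kde.example.kentik.com -p 5433 -U org_12345 -W
```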

Note: The prompt response is not the same as the string in the field labeled “KDE password.”

