Kubernetes Edge Interface

The easiest way to distribute your Kubernetes workload around the globe without having to manage multiple clusters across various cloud and infrastructure providers.


A simple Kubernetes deploy process

The Kubernetes Edge Interface (KEI) makes many edge clusters appear as a single cluster, enabling developers to quickly and easily deploy applications across a distributed Edge.

Powered by Adaptive Edge Engine

Edge routing, healing, scaling, placement and orchestration.

Section's Adaptive Edge Engine intelligently and continuously tunes your edge delivery network to ensure edge workloads are running in the optimal locations to maximize performance and cost benefits.

How KEI Works

Simplicity, flexibility, and control with Kubernetes-native tooling.

Standard Kubernetes Patterns

KEI is an implementation of the standard Kubernetes API.

Zero Code Modifications

Developers use their existing Kubernetes tools to manage applications.

Dynamically Optimized Edge

Multi-cloud and multi-region support, automated traffic routing, and high availability via standard configuration.

“As a network observability company, Kentik has a global view of the internet combining passive and active measurements. Partnering with Section allows us to quickly and easily augment our edge deployments, and their cloud-native platform and partnerships make it easy and affordable to integrate as we continually expand our footprint.”

Avi Freedman

CEO, Kentik

Frequently Asked Questions

How do developers use KEI?

Developers point their existing kubectl and helm tools at KEI. Because KEI implements the standard Kubernetes API, those tools continue to work unchanged. KEI makes the many edge clusters running on Section appear as a single cluster.
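For example, a typical deploy through KEI uses only standard tooling; the context name, manifest file, and chart path below are illustrative placeholders rather than Section-specific values:

# Point kubectl at the KEI endpoint (context name is illustrative)
kubectl config use-context section-kei

# Deploy and inspect workloads exactly as on any other cluster
kubectl apply -f deployment.yaml
kubectl get pods

# Helm charts install the same way
helm install my-app ./my-app-chart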

What Kubernetes objects does KEI use?

KEI permits a subset of Kubernetes objects, including Namespace, NetworkPolicy, Deployment, ReplicaSet, Pod, Service, ConfigMap, and Secret. In addition, KEI extends the Kubernetes API with custom objects: LocationStrategy, TrafficRoutingStrategy, and HealthCheckStrategy.
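As a sketch, a standard Deployment passes through KEI unchanged, and the custom objects are declared alongside it. The apiVersion, field names, and values in the LocationStrategy below are assumptions for illustration only, not a documented schema:

# Standard Kubernetes Deployment, accepted by KEI as-is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
        ports:
        - containerPort: 8080
---
# Hypothetical LocationStrategy; the API group and fields are illustrative assumptions
apiVersion: section.io/v1
kind: LocationStrategy
metadata:
  name: my-app-locations
spec:
  placement: dynamic   # e.g. let the Adaptive Edge Engine choose edge locations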

How does pricing work?

Section’s Edge Hosting solutions use a pricing model consistent with familiar cloud hosting: pricing is usage-based, with RAM and CPU as the key drivers.

Do I need to set container sizes or locations, or modify replica counts, to deploy my application to Section?

No. You do not need to modify any of these settings. You can deploy to Section immediately with our defaults and then tune container sizes, replica counts, and placement strategies at any time in the future as needed. (Note: the default container size is 1 vCPU and 2GiB RAM for each container deployed.)
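If you later want to override the default size, and assuming KEI honors standard Kubernetes resource requests for container sizing (an assumption for illustration; Section may expose its own tuning interface), the relevant excerpt of a Deployment might look like:

# Excerpt of a Deployment spec overriding the default 1 vCPU / 2GiB size.
# Assumes standard Kubernetes resource requests are honored by KEI.
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
        resources:
          requests:
            cpu: "500m"     # half a vCPU instead of the default 1 vCPU
            memory: "1Gi"   # 1GiB instead of the default 2GiB

Replica counts can be tuned the same way, through the standard replicas field or kubectl scale.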

Is there a default setting that applies in case I don’t know what inputs to use?

Yes. With our default Edge settings, your application's scale and locations are managed for optimal performance and minimum cost by Section's intelligent Adaptive Edge Engine, which continuously orchestrates and scales just the right amount of resources for your application in just the right locations.

Ready to jump in?

Get started with Section to experience all the benefits of an optimized Edge.