The market for application hosting has been rapidly evolving over the past few years, driven by technical advances (containerization, automation, AI, XaaS, virtualization, etc.), changing user expectations, the shift to mobile, and skyrocketing demand for digital transformation.
The result is both an explosion in new offerings and considerable confusion on the part of DevOps teams about how best to approach application hosting.
This white paper seeks to define and clarify the modern hosting landscape so organizations can readily recognize which options best meet their application needs. The resulting categorization starts from the premise that Cloud and Content Delivery Network offerings are steadily converging to automate the orchestration and distribution of general-purpose workloads, resulting in both improved application characteristics and decreased cost and management overhead.
Convergence of Cloud and CDN
Historically, hosting solutions have fallen into two different camps.
The first option, cloud compute, is designed around general-purpose workloads (that is to say, applications) hosted in discrete, centralized locations (i.e., not distributed). This is the world of hyperscalers such as Amazon Web Services, Microsoft Azure and Google Cloud Platform, along with many other providers large and small.
While it is possible to distribute applications more broadly using cloud compute – either with a hyperscaler or using a multi-cloud strategy – this is complex, and therefore atypical. It is more likely that an organization will select a particular regional datacenter for deployment, accepting the associated drawbacks in latency, resilience, etc. To the extent an organization elects distributed cloud compute, that distribution is largely a manual process that is up to the organization to coordinate and manage, one that drives up cost exponentially and requires considerable operational resources and sophistication (as well as additional tools such as global load balancers) to administer effectively.
The other hosting option is a content delivery/distribution network offered by companies such as Akamai, Cloudflare, Fastly, and many others. CDNs, as their name implies, are designed to facilitate content availability (websites, video, music, etc.) by caching and distributing that content as widely as possible. However, CDNs are neither designed nor suited for distribution of general-purpose workloads.
What has been historically lacking is the ability to excel at delivery of general-purpose workloads (a la Cloud) while broadly distributing those workloads without requiring the hosting organization to manage that distribution itself (a la CDN). This is the province of the Distributed Compute market.
The Best Hosting Solution
Every company wants the best possible system for application delivery, but what constitutes “best” can differ from organization to organization based on specific application/user needs, technical sophistication, organizational skills and resources, etc. That said, it’s possible to start with some basic assumptions around desired outcome and developer intent.
In general, organizations want applications to offer:
- The best possible experience – defined as highly performant, readily available, resilient and reliable, etc.
- With the least risk – defined as secure and compliant, but also future-proof around scalability, technology adoption, infrastructure flexibility, etc.
- At the lowest overall cost – both monetarily and in terms of resource investment around operations, administration, personnel, etc.
These considerations can be used to help clarify the application delivery landscape.
This white paper builds on the premise that distributed compute is desirable, an assumption that deserves inspection. What does application distribution achieve?
Section has written extensively on the importance of distribution for modern compute. Our white paper on The Edge Compute Equation walks through a demonstration of the advantages of distributed deployments versus centralized cloud delivery in equation format. Another paper on Why Organizations are Modernizing Applications with Distributed Multi-Cluster Kubernetes Deployments examines in detail a particular application of the distributed compute model involving containers in general and Kubernetes in particular.
To summarize our position: distributed compute offers the potential for broad benefits in terms of improved performance, increased availability and resilience, better scalability, reduced data backhaul, simplified delivery surface, enhanced workload compliance and isolation, lack of vendor lock-in, decreased cost and more. To paraphrase The Edge Equation white paper, all things being equal, it’s simply better to broadly distribute applications instead of centralizing them in a cloud or data center environment.
This raises the question: why aren't more applications run on distributed compute? The answer is also simple: distribution is hard. In fact, it's so difficult that no less an authority than Google touts the advantages but then cites overwhelming complexity when advising against multi-region deployment for most users in its documentation on Best Practices for Compute Engine Regions Selection for GCP.
Static vs Dynamic Compute
An important additional consideration is that for most modern applications, the compute landscape is dynamic, not static. That is to say that shifting traffic patterns, application changes or updates, network availability and other criteria would often dictate that – in an ideal world – organizations should be adjusting their hosting decisions continuously. This need for dynamism is accelerating thanks to trends such as 5G and edge compute, which increase the theoretical opportunity (but not ability) to dynamically adjust to changing demand.
That said, this level of dynamic responsiveness is clearly impossible through manual operation and intervention, which is how most distributed compute is delivered today. One possible alternative: computationally monitor and adjust application delivery – that is to say, “automate” this dynamic compute – in real time. More on that in a moment.
The Distributed Compute Landscape
Given these criteria, three factors rise to the top in defining the modern hosting landscape:
- Compute capability: Where does a distributed hosting solution fall on the spectrum of modern core compute capabilities and offerings? This can range from serverless functions, to containerization (single containers or multi-cluster microservice apps), to virtual machines. (Note that we have excluded co-lo and bare metal from this landscape, as these compute solutions are inherently not distributable.)
- Distribution: How is hosting distribution achieved? Considerations here include whether customers are required to bring their own infrastructure, and whether placement selection is discrete, non-specific, or dynamically location-aware.
- Orchestration: How is distributed operational complexity addressed? Considerations include whether orchestration is static (set and forget), whether organizations are required to bring their own routing capability and capacity, or whether orchestration is abstracted and dynamically managed through automation.
Combining the last two related factors – application distribution and orchestration of that distribution – into a single vector, these considerations can be used to define the X and Y axes of a new hosting landscape.
The Evolution of Distributed Compute
Our point of view on this evolving market is that application hosting is steadily advancing toward the ability for apps to intelligently traverse the world's available compute to run securely in the right location at the right time. Consequently, developers will need to focus less on manually and arbitrarily choosing and managing hosting locations; instead, they can specify hosting requirements based on application intent (reliability, performance, compliance, cost, etc.). This intent will be expressed through policy-based rules used by automated orchestration systems to dynamically and optimally determine workload distribution.
The logical outcome and impact of this distributed compute would be:
Simple application delivery
To simplify delivery, distributed compute should abstract both location (run where needed) and compute architecture (clusterless, as opposed to managing servers, operating systems or orchestration systems such as Kubernetes). Given an automated system to handle placement selection and application orchestration, teams will have an even greater opportunity to focus on application development and business logic, massively extending the shift in abstraction that the cloud computing revolution delivered to the market.
Cloud Native Adoption
Key considerations for development and management of applications across multi-vendor, multi-location, distributed hosting infrastructure will revolve around tool selection to support the development and operations processes. Developers and ops teams should be able to use industry standard tooling (such as cloud-native tools) rather than adopt complex, vendor-specific solutions. We expect the adoption of Cloud Native technologies to continue to increase.
Optimal Performance Versus Cost
Computationally orchestrated distributed compute can place application workloads as close to users as possible, allowing applications to run within milliseconds of users worldwide. Because placement is computed rather than manually chosen, such a system can also optimize for resource utilization, or for a combination of proximity and utilization, to always deliver the best performance-for-cost outcome.
Automated orchestration will manage application delivery based on intent, and adjust for compliance needs, shifting traffic patterns or application changes. For example, a developer should be able to specify something like “run containers only in Europe and where there are at least 20 HTTP requests per second” and have the distributed compute platform limit the target deploy field and continuously adjust within that field accordingly.
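As a concrete illustration of how such an intent rule might be evaluated, consider the sketch below. This is a hypothetical model only: the policy schema, field names, and the eligible() helper are illustrative assumptions, not Section's actual API or policy syntax.

```python
# Hypothetical intent-based placement policy. The schema and field
# names are illustrative assumptions, not a real platform API.
policy = {
    "workload": "checkout-service",
    "constraints": {
        "regions": ["europe"],   # compliance intent: deploy in Europe only
        "min_http_rps": 20,      # demand intent: run only where traffic exists
    },
}

def eligible(location, policy):
    """Return True if a candidate location satisfies the placement policy."""
    c = policy["constraints"]
    return (location["region"] in c["regions"]
            and location["http_rps"] >= c["min_http_rps"])

# An automated orchestrator would re-evaluate candidates continuously
# as traffic patterns shift, adding and removing deploy targets.
locations = [
    {"name": "fra1", "region": "europe", "http_rps": 45},
    {"name": "lon1", "region": "europe", "http_rps": 5},
    {"name": "nyc1", "region": "us", "http_rps": 120},
]
targets = [loc["name"] for loc in locations if eligible(loc, policy)]
print(targets)  # -> ['fra1']
```

In this sketch, only the Frankfurt location qualifies: London is in-region but below the traffic threshold, and New York is outside the compliance boundary. Re-running the evaluation as the traffic figures change captures the "continuously adjust within that field" behavior described above.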
Reliable and available
Distributed compute providers are likely to adopt federated networks featuring multiple providers (i.e., multi-cloud) to facilitate the ability to “run anywhere”. If done properly, this has the added benefit of ensuring consistent availability through real-time failure detection and re-routing.
Growth for distributed compute vendors will likely come from introducing solutions up and down the compute spectrum, including serverless functions, microservices and containerized applications, virtual machines, etc.
Distributed Compute with Section
Section is a Cloud-Native Hosting system that continuously optimizes orchestration of secure and reliable global infrastructure for application delivery. Section’s sophisticated, distributed and clusterless platform intelligently and adaptively manages workloads around performance, reliability, compliance, cost or other developer intent to ensure applications run at the right place and time. The result is simple distribution of applications across town or to the edge, while teams continue to use existing tools, workflows and familiar rules-based policies.
If you’d like to find out more about how Section is making application distribution a reality, please schedule a demonstration.