Get a Grip on Your Multi-Cloud Kubernetes Deployments

Pavel Nikolov, Software Engineer, Section
Photo by Pat Whelen on Unsplash
 

There are lots of reasons a multi-cloud strategy makes sense for a business. For example, perhaps you don’t want to stay tied to just one cloud provider in case prices increase. Or maybe you want to ensure reliability by eliminating a single point of failure. Or it could be that you want to use multiple providers to make sure that you have capacity when and where you need it most. All are valid.

As I observed in an earlier TechBeacon article, multi-cloud adoption is gaining traction as organizations prepare for the next inevitable cloud outage. The data bears this out: a recent report by Statista found that 57% of surveyed organizations reported using two or more clouds.

Fortunately, modern applications are making multi-cloud deployments more feasible through modular, containerized architectures, since containers can readily span more than one cloud provider. Kubernetes clusters are often used to simplify multi-cloud management and orchestrate containers on the individual underlying cloud systems. Because Kubernetes provides a common platform across heterogeneous cloud infrastructure, it allows DevOps teams to work in a common structure to manage and deploy the applications themselves, even if deploying and managing the Kubernetes layer differs from cloud to cloud.
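To illustrate that common structure, a minimal Deployment manifest like the sketch below (the names and image are placeholders) can be applied unchanged to any conformant cluster, regardless of which provider runs it:

```yaml
# Minimal Deployment sketch; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

The provider-specific work lives below this layer, in how each cloud provisions the cluster, its nodes, and its networking.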

Yet, as I pointed out in the earlier TechBeacon article, adopting a multi-cloud strategy isn’t as easy as hitting a "go" button.

Let’s say your organization is considering a global multi-cloud deployment. What does it take to manage an application that spans multiple cloud providers worldwide—moving workloads closer to users to maximize performance, for example, or using different providers to ensure capacity, availability, and resiliency when and where you need it? The short answer today is: It’s complicated.

Let’s assume you want to use a giant cluster that has a wide range of nodes available to handle workloads around the globe. It would be cost-prohibitive to have all those nodes running across cloud providers all the time; you need some way to schedule nodes to run only when and where they’re needed. Ideally, you would spin them up and down and move them around in response to user needs and demand.

To manage that giant cluster, you would lean on Kubernetes autoscaling components. Cluster Autoscaler adds or removes nodes in your cluster; Horizontal Pod Autoscaler adjusts the number of replicas of your application; Vertical Pod Autoscaler adjusts the CPU and memory requested by your pods.
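As a minimal sketch of how one of these components is configured, the HorizontalPodAutoscaler below (the target deployment, bounds, and CPU threshold are illustrative) keeps a workload between 2 and 15 replicas based on average CPU utilization:

```yaml
# Illustrative HPA; the target Deployment, replica bounds, and CPU target are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```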

But if you want to run a single large cluster, how do you know where to add nodes? There are no Kubernetes extensions that address that challenge. Additionally, this single giant cluster, spanning multiple regions and cloud providers, would be extremely difficult to maintain. Your operations team would spend much of its time constantly fixing the problems that arise. (Just imagine the single point of failure you’d face if you ever needed to replace your DNS.)

A better approach would be to have many smaller clusters strategically placed (say, for example, one each in Sydney, Hong Kong, Paris, Amsterdam, New York, and Los Angeles), with each of these points of presence existing on a different cloud provider. You would then be able to run your workload everywhere.

At the same time, as you can probably guess, this approach would increase your public-cloud costs. Moreover, you’re still facing a cloud-orchestration problem. What if your usage spikes only during the workday? Wouldn’t it be better to have the workloads follow the sun? You could spin up resources at 8 am in Paris and spin them down elsewhere—and then do the same at 8 am in New York, and so on.
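There is no built-in "follow the sun" controller, but as a rough sketch of the idea, a CronJob in each regional cluster could scale a deployment up at the start of the local workday and down at the end of it. The image, service account, and replica count below are assumptions, and the schedule assumes the cluster interprets cron expressions in UTC:

```yaml
# Rough sketch: scale the Paris workload up for the local workday.
# The image, service account, and replica count are assumptions; the service
# account must separately be granted permission to scale deployments.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-up-web
spec:
  schedule: "0 7 * * *"        # 07:00 UTC, roughly the start of the Paris workday
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployment-scaler   # assumed to have scale permissions
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl                # any image with kubectl works
            command: ["kubectl", "scale", "deployment/web", "--replicas=6"]
```

A matching CronJob would scale the workload back down in the evening, and the same pattern would repeat in every region. It works, but it is exactly the kind of hand-built scheduling that breaks down once demand stops following the clock.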

Unfortunately, here again, this would be incredibly challenging for even the largest ops team to manage manually. And while automation tools exist, what happens if traffic begins to spike in Sydney while it’s daytime in Europe? Prioritizing those workloads to scale and follow user demand becomes incredibly cumbersome to manage.

Developers and operations teams are going to need ways to orchestrate workloads around the globe, on different cloud providers and in different regions. Coming up with creative approaches to do so based on developer intent (say, running containers only in Europe and only where there are at least 20 HTTP requests per second, or maintaining at least two replicas for reliability but using no more than 15 servers due to budget constraints) will become a crucial balancing act.
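To make that intent concrete, it might be captured declaratively in something like the sketch below. This is a purely hypothetical format, not an existing Kubernetes API or any particular product’s feature:

```yaml
# Hypothetical intent document; none of these fields correspond to a real API.
intent:
  workload: web
  regions: [eu-west, eu-central]      # run only in Europe
  scaleOutWhen:
    httpRequestsPerSecond: 20         # add capacity only where traffic exceeds 20 req/s
  minReplicas: 2                      # reliability floor
  maxServers: 15                      # budget ceiling
```

An orchestrator that understood this kind of intent could decide where and when to run the workload, instead of the team scripting each region by hand.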

Of course, technology will catch up eventually, as it always does, to mitigate the headaches related to infrastructure provisioning, workload orchestration, scaling, and traffic routing. In the meantime, however, one tangible way to deal with this complexity is to feed data from the clouds, together with your operational goals, into a network-footprint-optimization system. By constantly measuring and mapping each application’s needs (for instance, real-time traffic levels versus compliance and security requirements, cost restrictions or targets, and reliability and performance goals), you can continuously recalculate the optimal delivery footprint for each application. From there, you can feed that information back into the orchestration system so it dynamically reorganizes itself to the new footprint, and then reshape traffic to fit the available footprint.

Regardless of the approach, getting a grip on managing Kubernetes clusters to streamline workload orchestration across hundreds (or even thousands) of endpoints across multiple clouds and regions will increasingly become an art for DevOps teams.
