There’s no doubt we’re now living in a hybrid multi-cloud age. Companies rely on major cloud providers like AWS, Azure and GCP, while niche cloud computing providers continue to emerge. Simultaneously, many organizations still retain workloads in on-premises data centers or embrace colocation. Regardless of deployment pattern, DevOps teams want the same elasticity the cloud offers, but it’s becoming increasingly difficult to manage the nuances of each service provider consistently.
To support today’s distributed polyglot architectures, a common layer could ease the work of deploying applications and provisioning servers across environments. By decoupling the infrastructure control plane from specific cloud providers, a unified management console could bring visibility into cost and performance, giving CloudOps teams that knowledge before they commit. Engineering such an outcome, however, would rely on plugins for each environment and a robust set of APIs to enable programmatic control.
I recently met with Jake Warner of Cycle.io to explore how increased infrastructure abstraction can solve these problems. According to Warner, a standard layer for compute management is needed to enable consistent control over multi-cloud and bespoke on-premises environments. Below, we’ll define this concept and consider the benefits infrastructure abstraction could bring to modern enterprise software development.
The Problems Facing Infrastructure Management
We like to think that the cloud is pervasive and that everyone’s on board with a similar computing style. However, this is far from the truth. Multi-cloud is on the rise: a Valtix study found that 95% of businesses made multi-cloud a strategic priority in 2022. Yet only about half of respondents felt they had the tools and skills required to execute a multi-cloud strategy.
“There’s a lot of value in multi-cloud, including reducing risk, ensuring better uptime and the flexibility to select the best hardware for the use case,” said Warner. “But multi-cloud is extremely complex.” Using multiple clouds means supporting increasingly variable operational processes for each environment. Whether it’s AWS, GCP, Azure, Equinix or Vultr, operators also require a deep level of visibility into telemetry data. This becomes complicated to track as software is increasingly modular and distributed across multiple computing regions.
Furthermore, on-premises data centers are far from dead: a Spiceworks study found that 98% of enterprises still use on-premises servers. Traditional servers are often preferred due to corporate restrictions or to meet security compliance mandates. It could also be a cultural case of “if it works, don’t fix it.” The dilemma is that common cloud features like auto-scaling and load balancing aren’t easily obtained on your own infrastructure.
Another issue Warner described is that traditional platform-as-a-service (PaaS) companies present inherent limitations if you aim to build substantive applications and services. “Many of these PaaS offerings make it really easy to get started but, more often than not, developers are quickly forced to make compromises with their approach. These compromises can create long-term technical issues, especially when it’s time to scale,” he said. “Need a server with a specific hardware configuration in a targeted region? Good luck. For most PaaS companies, their exit ramp is just as defined as their on-ramp.”
Movements Toward Infrastructure Abstraction
We’re already seeing movement within the market to manage the operational burdens of multi-cloud. One example is the Infrastructure Abstraction Layer (IAL), open sourced by Cycle.io, which Warner explained can act as a standardized system for deployments. With the specification, an infrastructure provider can generate an API endpoint to enable the provisioning of infrastructure without requiring a deep integration between both services. With this endpoint live, an application development platform or orchestrator gains the ability to send standardized JSON payloads to provision servers, allocate IPs, configure regions and a number of other requests. The response, another JSON payload, confirms that the desired request has been completed and includes a unique ID.
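To make the request/response flow concrete, here’s a minimal Python sketch of how an orchestrator might assemble a provisioning payload and read back the confirmation ID. The field names (`action`, `state`, `id`) and values are illustrative assumptions for this article, not the actual IAL schema.

```python
import json

def build_provision_request(provider, region, server_class):
    """Assemble a hypothetical IAL-style provisioning request payload."""
    return {
        "action": "server.provision",  # illustrative action name, not from the spec
        "provider": provider,
        "region": region,
        "server": {"class": server_class},
    }

def parse_provision_response(raw):
    """Extract the unique ID from a (hypothetical) successful response payload."""
    resp = json.loads(raw)
    if resp.get("state") != "completed":
        raise RuntimeError(f"provisioning failed: {resp}")
    return resp["id"]
```

In this sketch, the orchestrator would POST the request payload to the provider’s IAL endpoint and call `parse_provision_response` on the JSON body it gets back; the returned ID is what the platform would store to track the server.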
With more programmatic control over infrastructure, an engineer could interface with a standardized list of all the servers, across a number of providers, to deploy to and act upon this information in one unified console. This layer is an opportune location to help compare costs and server configurations for different regions too. It’s also an advantageous spot to filter by proximity to showcase the closest available servers to optimize delivery.
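A unified console could implement the proximity filter described above as a simple sort over the cross-provider inventory. This Python sketch assumes a flattened server list; the provider names, regions and fields are made-up examples, not real IAL output.

```python
# Hypothetical cross-provider inventory; fields are illustrative only.
INVENTORY = [
    {"provider": "aws",     "region": "us-east-1", "hourly_usd": 0.096, "latency_ms": 12},
    {"provider": "vultr",   "region": "ewr",       "hourly_usd": 0.071, "latency_ms": 18},
    {"provider": "equinix", "region": "ny",        "hourly_usd": 0.120, "latency_ms": 7},
]

def closest_servers(inventory, limit=2):
    """Sort the unified list by measured latency and return the nearest entries."""
    return sorted(inventory, key=lambda s: s["latency_ms"])[:limit]
```

Because every provider’s servers land in one normalized list, the same one-liner works whether the entry came from a hyperscaler, a niche provider or an on-premises rack.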
Benefits of Unifying Multi-Cloud Deployment
Decoupling the infrastructure management layer from service providers can bring real benefits to DevOps, said Warner. For one, it enables operators to extend infrastructure automation into their private data centers, helping them achieve the same programmability they would have in the cloud. A degree of standardization can also remove many of the downsides of multi-cloud: “Developers can move from one provider to another provider without any major changes along the way,” he said.
Of course, IT doesn’t want an added burden. This is why an API-based solution is required to enable programmability, said Warner. By taking a RESTful API approach to managing a hybrid infrastructure, operators could create custom flows based on fine-grained environmental characteristics that change over time, like latency, expense or location. “One of the neat things is it allows infrastructure providers to constantly make tweaks and improvements without having to resubmit their integration with an underlying platform,” said Warner.
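One such custom flow might pick a deployment target by applying latency and cost constraints before choosing the cheapest eligible candidate. This Python sketch is a simplified illustration of the idea, under the assumption that candidate data has already been fetched via the providers’ APIs; the thresholds and fields are invented for the example.

```python
# Illustrative candidate data, as it might look after polling provider APIs.
CANDIDATES = [
    {"provider": "a", "latency_ms": 9,  "hourly_usd": 0.20},
    {"provider": "b", "latency_ms": 25, "hourly_usd": 0.05},
    {"provider": "c", "latency_ms": 14, "hourly_usd": 0.10},
]

def pick_target(candidates, max_latency_ms, max_hourly_usd):
    """Return the cheapest candidate meeting latency and budget limits, or None."""
    eligible = [c for c in candidates
                if c["latency_ms"] <= max_latency_ms
                and c["hourly_usd"] <= max_hourly_usd]
    return min(eligible, key=lambda c: c["hourly_usd"]) if eligible else None
```

Because the characteristics feeding this policy (latency, expense, location) change over time, re-running the same flow against fresh API data lets the choice adapt without any change to the integration itself.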
The Flexibility of LowOps
Today’s software deployment is increasingly wide and varied—a recent HashiCorp report found that 76% of organizations are already multi-cloud. Multi-cloud brings many advantages, yet it also multiplies the number of operational tasks required. At the same time, organizations want to leverage investments into on-premises data centers for as long as they are economically viable. Automating DevOps for such a varied infrastructure can be difficult without a unified layer decoupled from each niche computing area.
According to Warner, companies require a LowOps platform and the flexibility to apply this automation to their own infrastructure when appropriate, and that infrastructure can vary widely; operators may be managing anything from traditional servers to a fleet of tens of thousands of IoT devices. Furthermore, as we’ve covered previously, a LowOps method can benefit small-to-medium-sized companies that lack the resources to support complex infrastructure like Kubernetes.
Could something like IAL ever become an open standard? Well, at the time of writing, the IAL specification has only been implemented by Cycle.io and a few of its infrastructure partners, but there’s no reason it couldn’t evolve into a common specification down the line, said Warner. He even foresees a community marketplace or shared registry for infrastructure abstractions, enabling platforms to support hundreds of cloud providers.
In conclusion, the need for LowOps, combined with the rise of multi-cloud and the staying power of on-premises data centers, is fueling demand for a decoupled management layer. Infrastructure abstraction could deliver the convenience of a PaaS while empowering operators with custom infrastructure choices.