APPLICATION MODERNIZATION

Incremental App Migration from VMs to Kubernetes — Pitfalls, Pipelines, and Avoiding Complexity

Migration guidelines for implementing continuous delivery and avoiding common pitfalls and antipatterns

Daniel Bryant
Ambassador Labs
Jul 9, 2019


One of the core goals when modernizing software systems is to decouple applications from the underlying infrastructure on which they are running. This can provide many benefits, including workload portability, integration with cloud AI/ML services, cost reduction, and improved (or delegated) handling of specific security concerns. The use of containers and orchestration frameworks like Kubernetes can decouple the deployment and execution of applications from the underlying hardware.

In the previous article of this series, I explored how to begin the technical journey within an application modernization program by deploying an Ambassador API gateway at the edge of your system and routing user traffic across existing VM-based services and newly deployed Kubernetes-based services.

This article builds on that journey, providing an overview of how to plan the migration, along with guidance on containerizing workloads and some networking gotchas to watch out for. The next article in the series will use a service mesh, like HashiCorp’s Consul, to route service-to-service traffic seamlessly across all platform types, regardless of whether your applications have been containerized or not.

Planning a Migration: Common Pitfalls

I’m going to assume that you are already sold on the benefits of modernizing your application stack, but there are some caveats that need to be stated upfront:

  • You can’t expect to migrate your stack overnight. There are too many moving parts and too much complexity within a typical existing (legacy/heritage) stack. Any migration needs to be planned and undertaken in a piecemeal fashion. The plan and the underlying infrastructure need to be flexible enough to adapt; for example, one team may decide to continue running their applications on VMs for the next year but still want to utilize the new SSO authentication or rate limiting protection. Your migration must be resilient and capable of adapting to the inevitable issues you encounter.
  • You should not plan to roll out a cloud migration as a big bang. Even for teams with a relatively small IT estate, the amount of risk involved with updating practically anything in a big bang fashion is too high, let alone changing your entire underlying infrastructure stack. Your migration must support incremental rollout.
  • You will have to ensure that all teams (both dev and ops) understand the new technologies and update their shared mental models accordingly. Traditionally, operations teams may have thought of an infrastructure platform as a set of compute nodes and layer 3/4 networking that they fully control, with component identity expressed as IP addresses and ports. In tandem, developers often believe that configuring the underlying platform infrastructure and its communication properties, such as service discovery, security, and rate limiting, is “someone else’s problem”. A migration towards cloud technologies must ensure that everyone embraces the concept of a shared, self-service platform, that system identity is based on service identity, and that dev and ops work together to configure the runtime communication properties of applications.

Migration Tactics

Given the above requirements, let’s now look at several tactics for implementing a migration.

Packaging in Containers

I talked about the challenges of packaging existing “heritage” applications within containers at DockerCon EU last year in “Continuous Delivery with Docker Containers and Java: The Good, the Bad, and the Ugly”. The talk focuses on the Java platform, but there should be useful takeaways for other language stacks.
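A minimal sketch of this approach for a Java service that already builds as a self-contained “fat” JAR might look as follows. The base image, artifact path, and port are illustrative assumptions, not a recommendation for your particular stack:

```
# Sketch: wrapping an existing Java fat JAR in a container.
# Base image, paths, and port are illustrative assumptions.
FROM openjdk:11-jre-slim

# Copy the artifact your existing build already produces
COPY target/app.jar /opt/app/app.jar

# The port the application already listens on when running on a VM
EXPOSE 8080

# Run the app the same way it runs on the VM today
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```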

If you subscribe to Docker Enterprise, the Docker team provides several tools for automatically packaging existing .NET applications into a Docker container. There are also initiatives by other organizations, such as Google, which has released the Jib container build tool.
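For instance, once the Jib plugin is configured in a Maven project, an image can be built and pushed without a Dockerfile or a local Docker daemon; the image reference below is a placeholder for your own registry and repository:

```
# Build and push a container image directly from a Maven project
# using Jib; replace the image reference with your own.
mvn compile jib:build -Dimage=gcr.io/my-project/my-app
```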

The CloudBees and Red Hat teams provide buildpack-style integration with their Jenkins X and OpenShift tooling, respectively, that assists with automatically generating a Dockerfile for existing applications. There have been demonstrations at previous DockerCons where technology like the Cloud Native Application Bundle (CNAB) has been combined with CLI tooling to automatically package applications.

Adapt Your Delivery Pipeline

Containerizing existing applications can require some shell script magic, but fundamentally the approach to this task is quite formulaic: understand how your application runs now and replicate this within a container. The biggest challenge is often verifying that the app runs correctly across various use cases. To perform this quality assurance, you will typically need to enhance your delivery pipeline, or create one if you don’t already have one in place. Delivery pipeline tooling like the previously mentioned Jenkins X will help here, and there are a variety of open source and commercial products, too, such as CircleCI, GoCD, and GitLab.

I talked about adopting a continuous delivery pipeline to build containers in my DockerCon EU presentation, and the accompanying example project provides some practical demonstrations. The key takeaway is to ensure that you execute all of your component-level and service-integration tests against the application or service running within a container.
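As a rough sketch of such a pipeline stage, the following shell script builds a candidate image and runs integration tests against the running container. The image name, /health endpoint, and Gradle integrationTest task are all hypothetical stand-ins for your own tooling:

```
#!/usr/bin/env sh
# Sketch of a pipeline stage that tests the containerized app rather
# than the bare binary. All names and endpoints are placeholders.
set -e

# Build a candidate image from the application source
docker build -t my-app:candidate .

# Start the candidate container and wait until it reports healthy
docker run -d --name my-app-test -p 8080:8080 my-app:candidate
until curl -fsS http://localhost:8080/health > /dev/null; do
  sleep 1
done

# Run component and service-integration tests against the container
./gradlew integrationTest -Dapp.base.url=http://localhost:8080

# Tear down the test container
docker rm -f my-app-test
```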

I have seen some organizations continue to execute tests as they always have done against the application binary and then package the app in a container as the final step of the pipeline. This approach frequently results in problems, as container technology can subtly alter the runtime characteristics of the infrastructure, such as limiting CPU time or memory, differing I/O performance from an underlying block store, or not providing enough entropy via /dev/random to run cryptographic operations such as token generation.
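One way to catch these issues early is to run your tests against a container configured with production-like constraints. The following is a sketch with placeholder image names and limit values:

```
# Sketch: apply production-like resource limits when testing.
# Older JVMs (pre-8u191) do not detect cgroup limits and may size
# heaps and thread pools against the host's resources instead.
docker run -d --name my-app-limited --memory=512m --cpus=1.0 my-app:candidate

# If cryptographic operations stall on low entropy inside the
# container, a commonly cited JVM workaround is the non-blocking
# urandom source (verify the security trade-offs for your workload):
#   java -Djava.security.egd=file:/dev/./urandom -jar app.jar
```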

Watch for Network Complexity

In the next article in this series, I will demonstrate how to use the Consul service mesh to extend the example application included in part one of this series, which was deployed on Google Cloud Platform VMs and Kubernetes. However, it is worth highlighting one of the primary issues you will encounter: the need for a fully connected network, which typically means using either a flat network or a series of routers or gateways to bridge disparate networks.

Several users of Ambassador deploy it in this way to segment networks or join existing segments, and other organizations, such as HashiCorp and Rancher, are working on gateways that can bridge multiple clusters, with Consul Gateways and Submariner, respectively.

Stay Tuned

In this second article in the series on application migration, I have signposted some of the challenges that the Datawire team and I have seen when customers are modernizing applications. In the next article, I’ll introduce the Consul service mesh and demonstrate how it integrates with Ambassador to ease the transition between VM-based applications and container-based services.

If you have any questions, please contact us via the website or at @ambassadorlabs on Twitter.


