APPLICATION MODERNIZATION

Incremental App Migration from VMs to Kubernetes — Routing Traffic Across Platforms & Clouds

Using the Ambassador API gateway and Consul to route traffic across multiple platforms and infrastructure

Daniel Bryant · Published in Ambassador Labs · 5 min read · Mar 13, 2019


At Ambassador Labs, we see more organizations migrating to their “next-generation” cloud-native platform built around Docker and Kubernetes. However, this migration doesn’t happen overnight. Instead, we see the proliferation of multi-platform data centers and cloud environments where applications span both VMs and containers. The Ambassador Edge Stack API gateway is being used in these data centers as a central point of ingress, consolidating authentication, rate limiting, and other cross-cutting operational concerns.

This article shows how to use Ambassador Edge Stack as a multi-platform ingress solution when incrementally migrating applications to Kubernetes. We’ve added sample Terraform code to the Ambassador Pro Reference Architecture GitHub repo, which enables the creation of a multi-platform “sandbox” infrastructure on Google Cloud Platform. This will allow you to spin up a Kubernetes cluster and several VMs, and practice routing traffic from Ambassador to the existing applications.

Edge Routing in a Multi-Platform World

You can use an edge proxy or gateway to help with a migration from a monolith to microservices or from on-premises to the cloud. Ambassador can act as an API gateway or edge router for all types of platforms.

It is trivial to configure traffic routing from the cluster to external network targets, such as endpoints within VPNs or virtual private clouds (VPCs), cloud services, cloud load balancers, or individual VMs. If you have network access to an endpoint, Ambassador can route to it.
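For illustration, here is a minimal sketch of a Mapping (in the ambassador/v1 syntax used throughout this article) that routes a URL prefix to an endpoint outside the cluster; the name, IP, and port are hypothetical:

---
apiVersion: ambassador/v1
kind: Mapping
name: legacy_vm_mapping
prefix: /legacy/
# Hypothetical target: any host:port the cluster can reach,
# e.g. a VM, a cloud load balancer, or a VPC-internal endpoint
service: 10.128.0.42:8080

In this version of Ambassador, a Mapping like this is attached to a Kubernetes Service via the getambassador.io/config annotation, as you will see in the Terraform output later in this article.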

Our Ambassador Pro Reference Architecture GitHub repo contains several folders that provide documentation and examples to help you understand how best to use all of the features that Ambassador supports, like rate limiting and distributed tracing.

One of these, the “cloud-infrastructure” folder, contains the necessary Terraform code and scripts to spin up a sample multi-platform VM / Kubernetes infrastructure using Google Cloud Platform (GCP). The resulting infrastructure stack is shown in the diagram below.

Building an Example VM / Kubernetes Platform

The Terraformed infrastructure example provided in the Ambassador Reference Architecture repo will create a simple regional network in GCP with a Kubernetes (GKE) cluster and several VM-based services deployed behind (publicly addressable) load balancers.

The application deployed on the VMs is taken from my “Docker Java Shopping” example, a very simple e-commerce shop consisting of three Java services: two built with Spring Boot and one with Dropwizard.

Deploying Ambassador within the Kubernetes cluster simplifies ingress for the entire network and allows the engineering team to centralize and standardize management of the gateway.

Centralizing operations at the gateway and the edge of the network provides many benefits, such as:

  1. Reducing “authentication sprawl” by consolidating authentication at the edge.
  2. Standardizing cross-cutting concerns such as TLS termination or pass-through, context-based routing (e.g., using Filters to route based on HTTP headers; see the sketch below), and rate limiting.
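As a sketch of the context-based routing mentioned in the list above, a Mapping can also match on HTTP headers; the header name and canary service here are hypothetical:

---
apiVersion: ambassador/v1
kind: Mapping
name: shopfront_canary_mapping
prefix: /shopfront/
# Only requests carrying this (hypothetical) header hit the canary
headers:
  x-canary-user: "true"
service: shopfront-canary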

After cloning the reference architecture repo, navigate to the GCP Terraform code folder. You will find a README with step-by-step instructions required to replicate our configuration. Be aware that spinning up this infrastructure will cost you money if you are outside of your GCP free trial credit:

$ git clone git@github.com:datawire/pro-ref-arch.git
$ cd pro-ref-arch/cloud-infrastructure/google-cloud-platform

Once you have everything configured and have run terraform apply successfully (which may take several minutes to complete), the infrastructure shown in the diagram above will have been created within your GCP account. You will also see some outputs from Terraform that can be used to configure your local kubectl tool and to set up Ambassador.

...
Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

Outputs:

gcloud_get_creds = gcloud container clusters get-credentials ambassador-demo --project nodal-flagstaff-XXXX --zone us-central1-f
shop_loadbalancer_ip_port = 35.192.25.31:80
shopfront_ambassador_config =
---
apiVersion: v1
kind: Service
metadata:
  name: shopfront
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: shopfront_mapping
      prefix: /shopfront/
      service: 35.192.25.31:80
spec:
  ports:
  - name: shopfront
    port: 80

The first output, named gcloud_get_creds, can be run to configure your local kubectl to point to the newly Terraformed Kubernetes cluster. For example, from the output above, I would run the following in my local terminal:

$ gcloud container clusters get-credentials ambassador-demo --project nodal-flagstaff-XXXX --zone us-central1-f
$ kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.59.240.1   <none>        443/TCP   28m

You can now install Ambassador into the cluster by following the Getting Started instructions, or the quick-start in the README. Once the gateway is up and running and you have obtained the external GCP load balancer IP for the Ambassador Kubernetes Service, you can deploy an Ambassador Mapping that routes to a GCP load balancer located outside of the Kubernetes cluster. I’ve deliberately kept the network routing and firewall rules simple with the current infrastructure, but future iterations of this tutorial will introduce more challenging configurations.
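For example, assuming the Ambassador Service is named ambassador and is deployed in the default namespace (as per the quick-start), you could capture the load balancer IP like this:

$ AMBASSADOR_LB_IP=$(kubectl get svc ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $AMBASSADOR_LB_IP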

The Terraform output named shopfront_ambassador_config provides Kubernetes configuration that can be copy-pasted into a YAML file and applied to the cluster. You should then be able to access the Shopfront service that is running on a VM (and communicating with other upstream services also running on VMs) via the Ambassador IP and the associated Mapping, e.g.: http://{AMBASSADOR_LB_IP}/shopfront/
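For instance, assuming you have saved the shopfront_ambassador_config output to a (hypothetically named) file shopfront-service.yaml, applying and testing it might look like this:

$ kubectl apply -f shopfront-service.yaml
$ curl http://$AMBASSADOR_LB_IP/shopfront/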

If all goes well, you should see the Shopfront UI rendered in your browser.

We are keen to add more complexity by creating network segments with peered VPCs and more complicated firewall rules. We will also look to demonstrate using Kubernetes ExternalName Services and Consul Connect to implement a multi-cluster service mesh with full end-to-end TLS.
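As a preview of the ExternalName approach, this type of Kubernetes Service simply maps a cluster-local name to an external DNS name, which Ambassador (or any in-cluster client) can then route to; the names here are hypothetical:

---
apiVersion: v1
kind: Service
metadata:
  name: shopfront-external
spec:
  type: ExternalName
  # Hypothetical DNS name of the VM-based service
  externalName: shop.example.com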

When you’ve finished experimenting with the Terraformed infrastructure, don’t forget to destroy the resources and clean up; otherwise, you could face an unexpected GCP invoice!

$ terraform destroy -force

Wrapping Up

We will continue to iterate on the example infrastructure code and plan to support additional cloud platforms like DigitalOcean and AWS. Please do reach out to me if you have any particular requests for cloud vendors or complicated routing scenarios.

As usual, you can also ask any questions you may have via Twitter (@ambassadorlabs), our Slack Community, or GitHub.
