APPLICATION MODERNIZATION

Incremental App Migration from VMs to Kubernetes — Routing Traffic with a Service Mesh

Using Ambassador API gateway and Consul service mesh to incrementally migrate applications

Daniel Bryant
Published in Ambassador Labs
Jul 25, 2019 · 12 min read


In the previous parts of this series I explored how you might deploy the Ambassador Edge Stack API Gateway within Kubernetes to route user-generated ingress traffic to an application hosted on an out-of-cluster VM, as part of a migration towards cloud and container platforms. In the first example, I deployed cloud-based load balancers in front of each VM-based service to provide server-side service discovery and load balancing, and routed ingress traffic from Ambassador to these load balancers. This is often a good starting point for many organizations, but as several KubeCon talks have shown, the operational cost and maintenance overhead of this approach can cause problems when running at large scale.

This article builds on the previous example and demonstrates integrating the Kubernetes-native Ambassador API gateway with HashiCorp’s Consul service mesh for service discovery, which removes the need to use internal cloud-based load balancers. I won’t be using the full power of Consul in this example, such as mTLS and Mesh Gateways, as this would be a lot to explore within a single article; instead, I’ll save it for a future article. Here we are focusing on integrating Ambassador with Consul to route ingress traffic to any target service, regardless of where that service is hosted, provided that the Kubernetes Pods can route to the target, i.e. you have a flat network or are using techniques like IP aliasing within a cloud environment.

Let’s first make clear the goals of this article.

Goals of this Tutorial

As I discussed in my HashiConf EU summary blog post, it is evident from the experience of both the HashiCorp and Datawire teams that many organizations are migrating to a multi-cloud, multi-platform, and multi-service world. This means that the core communication and networking technologies you choose have to be capable of supporting many different platforms.

Ambassador was designed and built as Kubernetes-native, as this allowed us to leverage battle-tested functionality provided by the platform, such as state management and auto-rescheduling of processes, but the gateway can ultimately route to any target. In this post you will learn how to integrate Ambassador with Consul, with the goal of routing ingress traffic to an endpoint managed using the dynamic service discovery capabilities of Consul.

Deploying the Terraformed Playground

This article assumes that you have followed along with part 1 of the series, and that you have deployed the Terraformed Kubernetes and VM instances into Google Cloud Platform. I’ll provide an outline of how to get started with the Terraform code below, but many of the concepts that I’ll discuss here are building on those introduced in the original article.

After cloning the reference architecture repo, navigate to the folder containing the GCP Terraform code, and you will find a README with step-by-step instructions required to replicate our configuration. Be aware that spinning up this infrastructure will cost you money if you are outside of your GCP free trial credit:

$ git clone git@github.com:datawire/pro-ref-arch.git
$ cd pro-ref-arch/cloud-infrastructure/google-cloud-platform
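For reference, the Terraform workflow itself is the standard init/plan/apply cycle; the README in this directory covers the variables (including secret-variables.tf) you need to set first:

$ terraform init
$ terraform plan
$ terraform apply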

Once you have created your secret-variables.tf file and have configured both your GCP account and Terraform, you can run terraform apply (which may take several minutes to complete). The infrastructure shown in the diagram above will then have been created within your GCP account. You will also see some outputs from Terraform that can be used to configure your local kubectl tool and to set up Ambassador. Upon a successful run, you should see something similar to the following output:

Apply complete! Resources: 6 added, 0 changed, 1 destroyed.

Outputs:

gcloud_get_creds = gcloud container clusters get-credentials ambassador-demo --project nodal-einstein-XXXXXX --zone us-central1-f
shop_loadbalancer_ip_port = 104.197.17.50:80
shopfront_ambassador_config =
---
apiVersion: v1
kind: Service
metadata:
  name: shopfront
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: shopfront_mapping
      prefix: /shopfront/
      service: 104.197.17.50:80
spec:
  ports:
  - name: shopfront
    port: 80
shopfront_ips = 35.239.47.184

Now that you have your playground deployed, let’s set about deploying Ambassador and Consul.

Deploy Ambassador

If you are a new user of Ambassador, then I would highly recommend exploring the getting started documentation. The docs will provide more details than my summary of the install below.

$ # configure my local kubectl to talk to the new k8s cluster
$ gcloud container clusters get-credentials ambassador-demo --project nodal-einstein-XXXXXX --zone us-central1-f
$ # set up RBAC
$ kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud info --format="value(config.account)")
$ # deploy Ambassador!
$ kubectl apply -f k8s-config/ambassador.yaml
$ kubectl apply -f k8s-config/ambassador-service.yaml

This is all it takes to deploy Ambassador and expose it via an externally facing load balancer. Now, onwards to the Consul installation!

Install Consul into Kubernetes

The easiest way to install Consul in this example is to deploy this into Kubernetes using Helm. Assuming that you have installed the Helm package manager tools locally, you can configure Helm on the Kubernetes cluster created above.

$ # create appropriate service accounts
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
$ helm init --history-max 200 --service-account tiller
$ git clone git@github.com:hashicorp/consul-helm.git -b v0.8.1

At this point, you will need to copy the consul-helm-values/ambassador-consul-values.yaml file from the current project directory to the freshly cloned consul-helm project directory, and then begin the installation:

$ cp consul-helm-values/ambassador-consul-values.yaml consul-helm
$ cd consul-helm
$ helm install --name=consul -f ambassador-consul-values.yaml ./

Pre-flight Check

At this point you should have a working Ambassador and Consul deployment within Kubernetes. Let’s verify everything with kubectl get svc:

The above output can appear slightly confusing, especially in a small terminal window. This is mainly due to the long Consul server Service name and the corresponding number of ports exposed by this service! However, the important thing is that you see all of the Ambassador and Consul services in your deployment.

You can view the Consul UI by copy-pasting the “consul-consul-ui” service’s EXTERNAL-IP into your browser:

You can also view the Ambassador diagnostic console, but as this is exposed via a NodePort, you will have to use kubectl port-forward to forward port 8877 from an Ambassador Pod to your local machine. The following steps show how to accomplish this:
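A minimal sketch of the port-forward, assuming the Ambassador Pods carry the service=ambassador label that the Ambassador Service selector uses (the Pod name will differ in your cluster):

$ # grab the name of the first Ambassador Pod, then forward the diagnostics port
$ AMBASSADOR_POD=$(kubectl get pods -l service=ambassador -o jsonpath='{.items[0].metadata.name}')
$ kubectl port-forward $AMBASSADOR_POD 8877:8877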

If you now navigate to localhost:8877, you should be able to view the Ambassador diagnostic UI:

You can see the Ambassador version information and links to documentation in the top two sections of the UI, and you can view the route table in the bottom section.

You’ll also notice that a list of “Ambassador Resolvers in Use” is displayed in the middle section of the window. At the moment you are only using the KubernetesServiceResolver, but you also want to enable the ConsulResolver. You’ve actually done half of the work to enable the Ambassador and Consul integration without even realizing it: if you look in the ambassador-service.yaml file, you will see the following configuration:

---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: ConsulResolver
      name: consul-dc1
      address: consul-consul-server.default.svc.cluster.local:8500
      datacenter: dc1
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    service: ambassador

In addition to configuring an external LoadBalancer that exposes Ambassador to external users, you can also see the ConsulResolver definition, which specifies the address of the Consul server and the datacenter. As you have deployed Consul within the Kubernetes cluster, you can use the Kubernetes-style FQDN to point to the corresponding Service. You are also free to deploy Consul outside of the cluster, and provided that the Ambassador Pods can route to the FQDN specified here, everything should work as expected.
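If you want to sanity-check that the resolver address is reachable from inside the cluster, one rough approach is to hit Consul’s status endpoint from a throwaway curl Pod (the curlimages/curl image here is just an example):

$ # run a one-off Pod that queries the Consul HTTP API and is removed afterwards
$ kubectl run consul-check --rm -it --restart=Never --image=curlimages/curl -- \
    curl -s http://consul-consul-server.default.svc.cluster.local:8500/v1/status/leader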

Let’s now connect up our applications running on VMs to the Consul cluster.

Extend the Consul Cluster to the VMs

The easiest way to get an external VM to join a Kubernetes-hosted Consul cluster is via the consul-k8s (provider=k8s) auto-join configuration, which is included within all modern Consul client agents. This allows a Consul agent to query the Kubernetes API in order to determine the location of the cluster nodes that it can join.
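Concretely, the agent’s join configuration ends up looking like the fragment below (you’ll see the full version inside the systemd unit later in this article):

consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server" kubeconfig=/home/daniel/.kube/config' ...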

As Consul will require credentials to connect to the Kubernetes API, the quick and easy way to do this is to share your own credentials with the Consul agent running on the VM. Note that this isn’t the recommended way to run this in production; instead, you should consider creating a separate, restricted GCP service account and providing your VM-based Consul agents with those credentials.
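As a rough sketch of that more production-friendly route (the exact role to grant is an assumption on my part; a read-only GKE role such as roles/container.viewer is typically enough to query the cluster):

$ # hypothetical example: a dedicated, read-only service account for the VM-based agents
$ gcloud iam service-accounts create consul-agent-vm --display-name "Consul agent on VMs"
$ gcloud projects add-iam-policy-binding nodal-einstein-XXXXXX \
    --member "serviceAccount:consul-agent-vm@nodal-einstein-XXXXXX.iam.gserviceaccount.com" \
    --role "roles/container.viewer"
$ gcloud iam service-accounts keys create consul-agent-vm-creds.json \
    --iam-account consul-agent-vm@nodal-einstein-XXXXXX.iam.gserviceaccount.com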

It’s worth mentioning here that I have Terraformed the GKE Kubernetes cluster as a VPC-native cluster with IP aliasing enabled. This enables routing to and from Kubernetes Pods (with their individual IPs) across all other networked resources within the VPC. This effectively creates a fully connected, or “flat”, network within the VPC, which simplifies the Consul configuration massively. I do want to shout out and offer kudos to Nic Jackson here for helping me figure this out; I experienced quite a bit of pain with the GCP routing until I pair-configured a cluster with Nic!

If you look back at the Terraform output you will see a shopfront_ips field. The IP address specified here is the VM you will need to ssh into to install Consul. In my example, the IP address is 35.239.47.184. I’ll demonstrate how to do all of this manually, as it’s a good learning experience, but in reality all of this should be automated via Terraform and a provisioner.

Note that I specified the ssh username on the VMs as “daniel” via my Terraform variables, and so if you have changed this, you will need to update the commands below to replace my name with yours.

$ export SHOPFRONT_IP=35.239.47.184
# specify on the shopfront VM where you want the k8s creds to be stored
$ export K8S_CFG_LOCATION=/home/daniel/.kube
$ ssh -i ~/.ssh/gcp_instances daniel@$SHOPFRONT_IP "mkdir $K8S_CFG_LOCATION"
# Copy your local k8s config file to the remote instance -- make sure you don’t have anything private in here
$ scp -i ~/.ssh/gcp_instances ~/.kube/config daniel@$SHOPFRONT_IP:$K8S_CFG_LOCATION/config
# Copy your local k8s credential file to the remote instance -- don’t do this for production use cases
$ scp -i ~/.ssh/gcp_instances ~/gcp_creds/nodal-einstein-XXXXXX-access.json daniel@$SHOPFRONT_IP:$K8S_CFG_LOCATION/creds.json

Now you can ssh to the VM and install the Consul agent.

$ ssh -i ~/.ssh/gcp_instances daniel@$SHOPFRONT_IP
Last login: Wed Jul 24 13:50:30 2019 from 94.5.XX.XX
daniel@shopfront-instance-0:~$

In the GitHub repo you cloned earlier, you will find a file consul-agent-install/install_consul_agent.sh. You will need to create a file on the shopfront VM and copy-paste the content of this file into it. As you do this, look through the file and try to understand how the Consul agent is being installed. The most interesting content is the consul.service systemd definition, as this is where the Consul agent is configured to query the Kubernetes API using the config and credentials you uploaded earlier:

[Unit]
Description = "Consul"
[Service]
KillSignal=INT
ExecStart=/usr/local/bin/consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server" kubeconfig=/home/daniel/.kube/config' -data-dir=/etc/consul.d/data -config-dir=/etc/consul.d
Restart=always
Environment=GOOGLE_APPLICATION_CREDENTIALS=/home/daniel/.kube/creds.json

Once you have copy-pasted the complete shell script, you will need to chmod u+x install_consul_agent.sh and then execute it. You can use the systemd journalctl command, journalctl -u consul.service -f, to check that the Consul service has been installed correctly:

daniel@shopfront-instance-0:~$ journalctl -u consul.service -f
-- Logs begin at Mon 2019-07-29 11:52:57 UTC. --
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] agent: (LAN) joining: [10.56.2.12]
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] serf: EventMemberJoin: consul-consul-server-0 10.56.2.12
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] serf: EventMemberJoin: gke-ambassador-demo-ambassador-demo-n-8009e513-50zl 10.56.0.6
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] serf: EventMemberJoin: gke-ambassador-demo-ambassador-demo-n-8009e513-zhtz 10.56.2.11
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] serf: EventMemberJoin: gke-ambassador-demo-ambassador-demo-n-8009e513-n9cz 10.56.1.4
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] agent: (LAN) joined: 1
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] agent: Join LAN completed. Synced with 1 initial agents
Jul 29 12:11:34 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:34 [INFO] consul: adding server consul-consul-server-0 (Addr: tcp/10.56.2.12:8300) (DC: dc1)
Jul 29 12:11:35 shopfront-instance-0 consul[6817]:     2019/07/29 12:11:35 [INFO] agent: Synced node info
Jul 29 12:11:44 shopfront-instance-0 consul[6817]: ==> Newer Consul version available: 1.5.3 (currently running: 1.5.2)

If you look through the log above you can see the Pod IP addresses of the Consul nodes running in Kubernetes (10.56.x.x) that the Consul agent running on the shopfront VM has joined.

Once you have executed this install script on the VM, you can check if the node has been registered within the Consul cluster. If you load the Consul UI in your browser and navigate to the nodes tab, you should see something similar to this, with the “shopfront-instance-0” showing as a registered node:

Now you can register your Shopfront Java application running on the VM with the Consul cluster.

Create a file on the VM in the location /etc/consul.d/shopfront.json and add the following Consul registration details:

{"service": {"name": "shopfront", "tags": ["springboot"], "address":"UPDATE_ME", "port": 80}}

Due to a (soon to be fixed) bug with Ambassador, you will need to change the “UPDATE_ME” token above to the internal IP address of the VM. In future versions of Ambassador, this won’t be needed, and Ambassador will determine the VM/Consul node IP automatically.

Looking at the Consul screenshot above, I can see that the IP address I need to use is 10.128.0.9. I could also run hostname -I on the VM to get the IP address. Replacing the UPDATE_ME token:

{"service": {"name": "shopfront", "tags": ["springboot"], "address":"10.128.0.9", "port": 80}}

After you have saved this file, you can restart the Consul agent to reload the config:

daniel@shopfront-instance-0:~$ sudo service consul restart

If you navigate to the Consul UI again, you should now see the service registered:
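You can also confirm the registration from the VM’s command line, assuming the local agent’s HTTP API is listening on the default port 8500:

daniel@shopfront-instance-0:~$ consul catalog services   # should list both "consul" and "shopfront"
daniel@shopfront-instance-0:~$ curl -s localhost:8500/v1/agent/services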

Now all that’s left to do is route to this external service via Ambassador!

Route to the Shopfront VM Service

On your local machine, open the Ambassador shopfront service mapping in the k8s-config/shopfront_consul.yaml file:

---
apiVersion: v1
kind: Service
metadata:
  name: shopfront-consul
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: shopfront_consul
      prefix: /shopfront/
      service: shopfront
      resolver: consul-dc1
      load_balancer:
        policy: round_robin
spec:
  ports:
  - name: http
    port: 80

From examining this file you can see that the resolver specified is the name of the ConsulResolver you defined in the initial Ambassador Service, and the service to route the prefix to is simply the name under which the service is registered within Consul. Apply this Mapping with kubectl:

$ kubectl apply -f k8s-config/shopfront_consul.yaml

If you now look at the Ambassador diagnostic console, you should see both the ConsulResolver active and the shopfront service specified in the route table:

If you get your Ambassador external load balancer IP from the earlier kubectl get svc command and add the Mapping prefix of /shopfront/ to the IP, you should see the shopfront application load in your browser:
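Equivalently, you can check the route from the command line, substituting the EXTERNAL-IP of your own ambassador Service:

$ # pull the LoadBalancer IP from the ambassador Service and request the /shopfront/ prefix
$ AMBASSADOR_IP=$(kubectl get svc ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl -v http://$AMBASSADOR_IP/shopfront/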

Tada! You are now routing to a VM-based service via Ambassador and Consul.

Once you’ve finished exploring the playground, don’t forget to run terraform destroy to delete the environment and save yourself from receiving a large bill from GCP!

Conclusion

Within this tutorial you have successfully deployed both Ambassador and Consul to a Kubernetes cluster, and routed traffic from Ambassador to an externally hosted (out-of-cluster) VM via the Consul service discovery mechanism.

This is a great way to get the benefits of handling ingress traffic with a cloud native API gateway like Ambassador, while using the platform-agnostic service discovery features of Consul to avoid having to move everything to Kubernetes.

To learn more, check out the following resources:

Or join our Slack Channel to ask questions and learn from the community!
