INDUSTRY TRENDS

Five Takeaways from HashiConf US 2019: Building Infrastructure in a Multi-* World

The importance of workflows, SaaS, dev/ops, and community

Daniel Bryant
Ambassador Labs
Sep 13, 2019 · 6 min read


The Ambassador team and I visited the fifth HashiConf US conference, where we delivered a presentation about implementing end-to-end security using Ambassador and Consul, attended many of the talks, and chatted with lots of our fellow attendees. A great deal of information and experience was shared at the event, and here are my top five takeaways:

  • The World is Going Multi-{Cloud, Platform, Service}
  • Focus on Workflows First, then Tools
  • Avoiding Undifferentiated Heavy Lifting is for HashiCorp Tools, Too
  • Platform Teams Continue to Learn from Software Engineering (and Vice Versa)
  • People, Community and Ecosystem: The Core of HashiCorp

The World is Going Multi-{Cloud, Platform, Service}

I first heard the statement “the world is going multi-cloud/platform/service” from HashiCorp co-founders Armon Dadgar and Mitchell Hashimoto at HashiConf EU earlier in the year. At this event they reiterated the trends they see within the industry and once again stressed the importance of designing and delivering software, and the supporting platforms, with this in mind. Relatedly, the HashiCorp Cloud Operating Model was once again on full display, and I believe it is a very useful tool for understanding and planning the evolution of applications and infrastructure.

A recurring theme in several conversations I had with fellow attendees was “choosing the right tool for the job”, with the “tool” variably referring to infrastructure, architecture, and code. This event obviously consisted of a self-selected audience of HashiFans, but it’s still worth mentioning that there was decided pushback against the idea of an organization attempting to select “one cloud to rule them all”, an approach I have heard advocated at previous events.

Even engineers from large organizations were not looking for full workload portability (i.e. being able to deploy each app to multiple clouds). Instead, they wanted to leverage different clouds and platforms to meet different requirements: for example, AWS EKS for microservices, Azure Functions for simple event-driven batch processing, and Google’s data engineering and machine learning services.

I also observed two patterns in the adoption of HashiCorp tooling among the engineers I chatted with:

  • Infrastructure-driven — in which platform/ops teams adopted Terraform and Vault to deploy, configure, and secure infrastructure, which in turn provides a platform onto which applications are deployed
  • Runtime-driven — in which developers adopted Consul for service discovery (or for key-value storage and access to distributed system primitives such as locks), or Nomad for deployment and workload management.
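As a concrete illustration of the runtime-driven pattern, registering a service with a local Consul agent is a small config fragment. This is a minimal sketch; the service name, port, and health-check endpoint are hypothetical placeholders:

```hcl
# Minimal, hypothetical Consul service definition for service discovery.
# Registering it with a running agent (e.g. `consul services register web.hcl`)
# makes the service resolvable via Consul DNS (web.service.consul) or the HTTP API.
service {
  name = "web"
  port = 8080

  check {
    http     = "http://localhost:8080/health"  # placeholder health endpoint
    interval = "10s"
  }
}
```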

Regardless of how HashiCorp products were first integrated, when one group successfully adopted its chosen tools, the other group was often tempted to explore the rest of the suite.

There were also a couple of interesting stories shared in relation to organizations using Consul as a service mesh to migrate from one infrastructure to another. This functionality within Consul is relatively new — indeed, the layer 7 routing and observability support and the cross-datacenter mesh gateway functionality were only announced within the past few months — and so I would expect the number of these stories to increase over the coming years, as we see not only migrations but also hybrid infrastructures.

If you want to learn more about potential strategies for a multi-infrastructure/platform migration, I published a four-part guide as part of my HashiConf talk that focuses on API gateways, service meshes, and security.

Focus on Workflows First, then Tools

In the previous takeaway, I mentioned that the engineers I spoke to weren’t typically looking for full workload portability (i.e. the ability to run every application everywhere), and instead what folks appeared to be looking for was a consistent way to build, deliver, and manage the infrastructure. In a nutshell, they were looking for a consistent workflow across multiple clouds and platforms, and the HashiCorp tooling suite is primed for this.

The obvious tool to mention here is Terraform: although many of the configuration stanzas differ across the various clouds, the workflow of defining, planning, and applying the configuration is identical regardless of platform. Increasingly, this principle is also being applied to Consul, as the extension of the Consul Connect service mesh functionality and the adoption of the Envoy Proxy mean that the workflow associated with traffic routing, shifting, and splitting is consistent across platforms.
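A minimal sketch of this principle follows; the resource values are illustrative placeholders. The stanzas below are AWS-specific, but the surrounding workflow is the same on every cloud:

```hcl
# Hypothetical AWS configuration: only the provider and resource stanzas
# are cloud-specific. The workflow is identical everywhere:
#   terraform init && terraform plan && terraform apply
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"
}

# A GCP equivalent would swap only the stanzas, not the workflow, e.g.:
#   provider "google" { ... }
#   resource "google_compute_instance" "app" { ... }
```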

I’ve experimented with this myself, and have created a tutorial on how to route traffic across Kubernetes and a series of VMs with Ambassador and Consul on Google Cloud Platform. My buddy Nic Jackson, developer advocate at HashiCorp, has also talked extensively about this, and he has several demo repositories on GitHub.

Avoiding Undifferentiated Heavy Lifting is for HashiCorp Tools, Too

Two of the biggest announcements from this HashiConf were the new HashiCorp Consul Service (HCS) on Azure and the extended functionality included in the free tier of Terraform Cloud.

Both of these announcements extend the HashiCorp “as-a-service” offerings. HCS is effectively a fully managed Consul service mesh, and the ability to run the full Terraform workflow remotely in Terraform Cloud (plan and apply, in addition to remote state management) now makes this an easy on-ramp to a fully managed infrastructure as code (IaC) delivery pipeline, with the logical end game for big organizations being Terraform Enterprise.
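At the time of writing, pointing an existing configuration at Terraform Cloud is a matter of configuring the `remote` backend; in this sketch the organization and workspace names are hypothetical:

```hcl
# Hypothetical remote backend configuration: with this in place, state storage,
# plan, and apply all run in Terraform Cloud rather than on the operator's machine.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"   # placeholder organization name

    workspaces {
      name = "app-infra"           # placeholder workspace name
    }
  }
}
```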

HashiCorp tools have always focused on improving the developer experience with IaC and making operations easier, but in the past you typically had to roll up your sleeves and deploy and manage a large part of the Consul and Terraform platform yourself. Not so any more.

Platform Teams Continue to Learn from Software Engineering (and Vice Versa)

Two very interesting customer talks I saw at HashiConf were delivered by the Starbucks Technology team and Criteo. I’m sure most folks have heard of Starbucks (and the conference was held in Seattle!), but for context, Criteo is a web advertising real-time bidding platform. Both talks clearly demonstrated the positive impact of shared learning across dev and ops.

In the talk “Infrastructure as Code for Software Engineers”, Mike Gee and Ryan Hild shared the journey of how they used their experiences as developers to evolve the platform code, tooling, and processes associated with managing the infrastructure behind their web platform. Key takeaways for me included: embrace modularization of declarative infrastructure code; define standardized workflows, tooling, and scripts; create an effective delivery pipeline; invest in testing (following the test pyramid); and expose programmatically consumable metrics.

The talk “Inversion of Control With Consul”, by Pierre Souchay, shared the experiences of the Criteo team as they moved from VM-based infrastructure to a container-based Apache Mesos platform. Here, the ops team worked with developers to simplify the management of application metadata, track service updates, and improve application maintenance. Key takeaways from this talk included: keep information close to where you will use it; define and effectively manage metadata, as this is essential for understanding your system; automate the rollout, and clean-up, of application configuration when operating at large scale or within a highly dynamic environment; and focus on defining configuration that is business- or semantically relevant, rather than tool-specific.

People, Community and Ecosystem: The Core of HashiCorp

The final topic I wanted to highlight is the continued commitment by the HashiCorp team to fostering a diverse and collaborative community. Through my work at Ambassador I have seen that this commitment extends outwards to the partner ecosystem, and from my work in open source I have seen how this continues further to cover all of the users of HashiCorp technology.

The HashiCorp team is growing rapidly, and although practically every member of the organization I have met leads by example, the commitment to sustaining an effective culture was clearly on display in the Wednesday morning opening keynote by Preeti Somal, VP Engineering at HashiCorp.

It’s worth mentioning that the number of HashiCorp users is increasing rapidly, too, and there is now a clear effort to facilitate the sharing of knowledge and experiences via the new learn.hashicorp.com and discuss.hashicorp.com websites. You will definitely find the Ambassador Labs team contributing here (we’ve already asked a few questions), and we want to shout out the help we’ve received from Nic and Todd Radel during our Ambassador and Consul integration work.

Wrapping Up

Here at Ambassador, we’re keenly watching these trends to ensure that our Ambassador API gateway and associated edge stack continues to evolve to meet the requirements of cloud-native companies, multi-platform teams, and organizations undertaking an application modernization program.

Ambassador has first-class Consul service mesh support, which allows dynamic routing across platforms, easy implementation of complete end-to-end TLS support, and a workflow for incrementally migrating applications from VMs to Kubernetes.

If you need an out-of-the-box Ambassador setup that includes integrated authentication, rate limiting, and SLA-backed support, then check out Ambassador Pro, our commercial product.

And, if Ambassador is working well for you, we’d love to hear about it; you can reach us via Twitter (@ambassadorlabs) and Slack, or raise issues via GitHub.
