SERVERLESS USE CASES

Self-Serverless: Why Run Knative Functions on Your Kubernetes Cluster?

Six patterns driving the adoption of FaaS for dev and ops

Daniel Bryant
Published in Ambassador Labs
5 min read · Aug 27, 2019


One of the highlights at KubeCon EU for the Ambassador Labs team and myself was the reveal at the end of the “Extending Knative for Fun and Profit” talk by Matt Moore and Ville Aikas that they had replaced Istio with the Envoy-based Ambassador API gateway for their Knative demonstrations.

After this talk we had several attendees pop by the Ambassador booth and ask questions about the Knative + Ambassador integration. We saw this trend continue via our Ambassador OSS Slack. This eventually led to the creation of a much more polished integration with Knative in Ambassador version 0.73.

Knative’s dependency on Istio has been a point of contention for quite some time, and as dependencies go, this one is quite a high cost. As useful as Istio may be, adding a service mesh into your tech stack is not something you should consider lightly, especially when all you want to do is explore function-based serverless approaches to writing applications. This didn’t stop some early experimentation with the Knative framework, which is continuing now into several clear use cases for development and operations.

Developer use cases for Knative

Broadly speaking, the developer use cases for Knative fell into three categories:

Replace glue/aggregation functions with Knative and k8s workflow

Function as a Service (FaaS) offerings have become popular to deploy and run services that “glue” functionality together. Two examples are:

  1. A simple process that watches a message queue and calls other services based on the message payload (in a similar fashion to the classic Message Router EIP), and
  2. An API aggregation, or “request batching”, service that exposes a single API endpoint and returns data by internally orchestrating multiple batched requests to additional upstream services and aggregating the responses.
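The first pattern can be sketched in a few lines. This is a minimal illustration of a message-router glue function; the event types and downstream service names are hypothetical stand-ins, not part of any real deployment:

```python
# A minimal sketch of a message-router glue function, in the spirit of
# the classic Message Router EIP. The event types and downstream service
# names below are hypothetical stand-ins.

ROUTES = {
    "order.created": "billing-service",
    "order.shipped": "notification-service",
}

def route_message(payload: dict) -> str:
    """Return the downstream service that should handle this payload."""
    # Unknown or missing event types are diverted to a dead-letter queue.
    return ROUTES.get(payload.get("type"), "dead-letter-queue")
```

A function like this is small enough that packaging it as a Knative Service, rather than a full microservice, is an attractive fit.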

Although the cloud vendor FaaS offerings have tight integration with services and data stores running on their platform, one of the primary challenges for engineering teams is that the developer experience and workflow for deploying cloud-based FaaS is different from that for Kubernetes. If you’ve already invested in training engineers to work with Kubernetes, then training them on a separate FaaS offering costs additional time and money, not to mention the increase in cognitive load that comes with using two different platforms.

Build smaller microservices as functions

Several engineers discussed their desire to deploy simple event-driven functions without provisioning (or running) an entire microservice/app framework, like Spring Boot or Rails. As useful as these frameworks are — particularly when building microservices based around business contexts — they can add unnecessary overhead for simple integration use cases. Knative provides “just enough” framework to deploy and manage the lifecycle of a very simple microservice or “nanoservice” using the primitives provided within modern language stacks.
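To make the “nanoservice” idea concrete, here is a sketch built only on Python’s standard library, with no app framework at all. The JSON echo payload is illustrative; reading the listen port from the PORT environment variable follows Knative Serving’s container convention:

```python
# A "just enough" nanoservice sketch using only Python's standard
# library, with no Spring Boot/Rails-style framework. The JSON echo
# payload is illustrative; Knative Serving tells the container which
# port to listen on via the PORT environment variable.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to every GET with a small JSON body describing the request.
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep container logs quiet for this sketch

def serve():
    # Knative injects PORT; default to 8080 for local runs.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), EchoHandler).serve_forever()
```

Packaged in a container image, this is the entire deployable artifact: no framework bootstrap, just the language’s own HTTP primitives.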

Deploy high-volume functions, cost effectively

Many of us love public cloud tech, but one of the associated challenges that we frequently bump into is calculating running costs. It is undeniable that pay-as-you-go serverless offerings can be very cost effective for certain use cases, such as short-lived, bursty workloads (and this fantastic serverless use case talk from Gojko Adzic is well worth watching).

However, for longer-running or high-volume functions, the PAYG serverless charging model can get expensive at scale, particularly when you combine the cost of the serverless runtime with the cost of the additional cloud services that are required.

Running Knative on your own hardware, or via Kubernetes deployed on cloud VMs, can make execution costs easier to predict when you know that a service will handle high-volume traffic.
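The break-even point can be estimated with some back-of-the-envelope arithmetic. Every price in this sketch is an illustrative assumption, not a real cloud list price:

```python
# Back-of-the-envelope comparison of a PAYG FaaS bill against a
# self-managed cluster for a high-volume function. Every price below is
# an illustrative assumption, not a real cloud list price.

def faas_monthly_cost(invocations, gb_seconds_each,
                      price_per_million_requests=0.20,
                      price_per_gb_second=0.0000166667):
    # PAYG bills per request plus per unit of memory-time consumed.
    request_cost = invocations / 1_000_000 * price_per_million_requests
    compute_cost = invocations * gb_seconds_each * price_per_gb_second
    return request_cost + compute_cost

def cluster_monthly_cost(nodes, price_per_node_hour=0.10, hours_per_month=730):
    # Self-hosted capacity is a flat fee regardless of invocation count.
    return nodes * price_per_node_hour * hours_per_month

# 500M invocations/month at 0.25 GB-seconds each, vs. a three-node cluster:
payg = faas_monthly_cost(500_000_000, 0.25)
self_hosted = cluster_monthly_cost(3)
```

The key structural point, independent of the exact prices, is that the PAYG bill scales linearly with invocation volume while the cluster cost is flat, so at sufficiently high and steady volume the self-hosted option wins.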

Operational advantages of using Knative

Provide the fundamentals of a self-service platform

Operations and platform teams are often under-resourced within an organization, but they provide the essential foundations that development teams use to deliver value to customers. Knative provides “just enough” platform components to allow ops teams to expose a self-service platform to developers working on customer-facing services.

The Knative serving and eventing primitives provide functionality for deployment and the consumption of data, and new open source projects like Tekton Pipelines provide primitives (defined via Kubernetes CRDs) that allow the easy creation of continuous delivery pipelines for Knative, much as GitHub Actions is trying to do more generally.

Use spare capacity in existing clusters

The cost advantages of deploying high-volume functions, mentioned previously, become even more attractive if you have existing spare capacity, or if your workloads are “bursty” or can be prioritized.

Running Knative functions on spare capacity that would otherwise go unused is effectively “free” (if you ignore the upfront expense and over-provisioning), and if the functions are processing non-time-sensitive workloads, then they can be scaled down when high-priority, real-time requests increase.

Examining the internals

Being able to “look under the bonnet/hood” of Knative and explore how your functions run within Kubernetes inspired an interesting discussion. In certain use cases, engineers are more than happy to defer platform operation to a third party, accepting the opacity of the runtime and underlying hardware as the tradeoff, but others argued that not being able to get low-level access to debug a cloud serverless offering is a real limitation.

Suppose an operations team manages everything from VMs to the Kubernetes cluster and Knative components. In that case, the expense of managing this can be weighed against having unrestricted access to all levels of the platform when exploring any issues.

Obviously, the operations team will need the appropriate knowledge and skills. Still, they can observe the actual hardware performance characteristics, add monitoring agents, and set breakpoints throughout the code.

The future is (probably) serverless

Many articles discuss how “serverless” technologies and the FaaS approach to building systems are the future. There are undoubtedly a lot of advantages to using a fully-featured platform that enables developers to deploy and operate their code efficiently. However, one of the main challenges at the moment is deciding whether to run a serverless platform yourself. Ultimately, this decision depends on your use cases, existing technology commitments, and desire to invest in certain skills. Still, in this article, I have attempted to highlight some of the development and operational use cases we have seen for using Knative.

I would be keen to hear what you think about this — please comment, tweet, or join me in the Ambassador OSS Slack, and share your ideas. You can also check out the docs for Ambassador’s Knative integration.
