A serverless glossary

Nica Fee

With serverless, it's not the technology that's hard; it's learning the language of a new culture and operational model. Serverless architecture has coined some new terms and, more confusingly, re-used a few older terms with new meanings. This glossary will clarify some of them.

App

An app is a term without an exact technical definition; it’s a general term that covers both ‘web services’ and ‘things running inside private networks that only do things for your organization.’

Technical genius Naomi Rubin spoke recently about how it’s hard to define exactly what a web developer makes: how is one project, where you create a complex browser experience, like another where you create a frontend for a massive database of scanned documents? Her conclusion was that we should call what we make “wobsites”. For now, we’re sticking with ‘App’.

Cloudside

Where cloud resources and environments exist: where we instantiate a configuration of cloud services to deploy our app's architecture, for example for a development environment.

Cloudside Development

In traditional web development, we often start with a ‘local’ copy of every part of our application. We run a database server, media storage, and of course a web application framework, all on our local machine. This local emulation always had some gaps; I’ve never met anyone who developed with a local version of their caching layer or CDN on their machine!

As serverless has grown in depth, it’s become apparent that this kind of local emulation is no longer possible. This points towards a model of cloudside development, where the process of writing and running our code involves making requests to resources in the cloud from the very beginning.

This reveals a serious problem for serverless development: it takes time to deploy code to the cloud. If we’re waiting for our serverless function code to become available in the cloud just to see whether it works, our development process has slowed way down.

Thankfully, Stackery has just released tooling to resolve this tension: you write and run your code locally while still relying on cloud-hosted resources, and your locally-running serverless functions can even call other functions that live in the cloud. We call it Cloudlocal; try it for yourself.

Deploy

Your first serverless function will probably start with some clicks in the AWS console. But as your functions come to rely on other services, it gets more difficult to change function code without also reconfiguring the rest of your serverless stack.

We need to think of all these changes as a bundle of things that all need to happen at the same time. Hence a deploy: a single step where environment parameters are set, resources are configured, and code is updated all at once.

To set up a dev environment, we also deploy cloudside resources without any code, creating a set of resources we can develop against. The fact that this deploy happens without code makes getting started with serverless mentally different for many developers.

Environment

As you develop your web service, you will want to create a demonstration database full of fake customers and other data. Once you have real users you’ll store their data in another database. This is just one way that we come to have a sense of different ‘environments’.

In this example, our application code (see Functions below) can run in multiple environments without any changes. We shouldn’t have to change the code to say ‘now look up real customer accounts instead of the table of Game of Thrones characters I set up when I was writing this application’.

As our app gets more complex, the environment is not a single setting, like which database to use, but a whole bundle of configuration. For convenience, tools like npm have built-in settings to say “use this environment” to switch a whole bunch of things at once. Surprisingly, Amazon Web Services (AWS) has no built-in concept of an environment. That means there’s no clear way to flag that a Lambda should switch from one environment to another, along with all of its attached resources. You can give your Lambda an environment variable to have it change context, but that same environment variable won’t propagate to the API gateways, databases, and other components of your stack. Thankfully this is one of the many aspects of serverless development that Stackery can handle for you.

The most common environment labels are Development (“I am working on making my app do new things”) and Production (“the real application my users use”). Almost all organizations have some kind of Test or Staging environment where the changes you developed are tested in a setup that’s slightly more real. This has made a lot of people very angry and been widely regarded as a bad move.

Environment Parameters

Reading the definition of environment above might have raised a question for developers: how can you change environments without changing your code? Doesn’t your code have to say ‘go look in this database’ with the name of that database?

In local development (when you’re writing a web app on your laptop), this is handled with environment parameters: points where you say ‘when I start this program, I’ll give you some variable definitions; while running, just check these variables and I’ll have set them based on your environment.’

The concept is the same with serverless functions: the AWS Lambda console even has a little area for setting environment variables in the UI. You can also use environment parameters to provide pointers to resources connected to your serverless functions.
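As a sketch of how such a pointer gets set (the resource and variable names here are hypothetical), a SAM template can hand a connected table’s name to a function as an environment variable:

    # Hypothetical sketch: the function learns which table to use from an
    # environment variable, so the same code runs in any environment.
    Resources:
      CustomersTable:
        Type: AWS::Serverless::SimpleTable
      LookupFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs18.x
          CodeUri: src/
          Environment:
            Variables:
              TABLE_NAME: !Ref CustomersTable  # code reads this at runtime

Swap the table out and the function’s code never changes; only the variable’s value does.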

Stackery saves time and irritation in this area by letting you set a bunch of environment parameters all at once, and automatically providing parameters for any resource connected to your function.

Functions/Serverless Functions/Lambdas

A serverless function is where you’ll write your application's code. Sometimes the word “small” comes up in these definitions, and since functions don’t need explicit code for handling requests, it’s true that the code for a function should be more compact than the exact same logic written as a traditional web app. But like a web app, you can still import code packages and write thousands of lines of code as needed.

It’s often stated that “functions are where your business logic goes,” and while they certainly contain business logic, there are also key concerns that live outside of your Lambdas. See Nate Taggart’s article on serverless without functions.

Okay, that’s a few things that functions aren’t. What are they? They’re event handling code that is completely modular and disposable. Functions have no stored state, and they should scale horizontally very easily, i.e. when you get 10 times as many requests, it should be trivial to spin up 10 times as many functions.

AWS is extremely dominant in the serverless space currently, so their product name “Lambdas” is often used as a generic term for serverless functions. This can occasionally cause some confusion with the computer science concept of the lambda calculus.

Infrastructure-as-Code

The underlying layers below your application code (operating system, hardware, networking) used to be implicitly handled separately from the application code. How much memory would be available to your app? It depended on how much memory your IT person installed on the server. What version of the Java Virtual Machine would be in use? Ask your Ops person.

This of course led to conflicts that made applications fail at runtime. The initial solution was documentation: massive internal wikis that listed what each machine needed to run successfully.

But as servers virtualized, they no longer needed someone to go install more physical RAM to increase memory, and with the advent of containerization, the operating system layer was quantified in a configuration file.

Infrastructure-as-code is the culmination of these efforts. There are massive benefits when all the underlying layers for your app are quantified and communicated in a file that is managed with version control along with the rest of your application code.
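To make that concrete, here’s a hypothetical sketch in SAM’s YAML: the questions above stop being questions for your IT or Ops person and become reviewable lines in a version-controlled template:

    # Hypothetical sketch: memory and runtime are no longer tribal
    # knowledge on a wiki; they're declared in the template itself.
    Resources:
      ReportFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: com.example.Report::handleRequest
          Runtime: java17    # the JVM version: declared, not asked about
          MemorySize: 512    # megabytes available: declared, not installed
          CodeUri: target/report.jar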

localhost

A place where you can't get consistency with live cloud resources, but where we learned to quickly iterate our code.

Permissions

A generic-sounding term for the biggest configuration hurdle of serverless. Permissions covers what both users and resources can do.

  • For users, AWS controls what you can and can’t deploy, and what permissions you can give to your resources
  • For resources (see below for a list, but the short version is ‘Lambdas and more’), permissions control which resources can talk to which others, and can get more fine-grained than that, e.g. “this Lambda can request to read items from this DB, can write new records to this other DB, and can read and update but not create new records on this third DB” (see the sketch after this list)
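As a sketch of that last example in SAM’s YAML (the names are hypothetical), policy templates let you grant exactly the level of access a function needs:

    # Hypothetical sketch: this function may read the orders table,
    # but has no permission to write or create records in it.
    Resources:
      OrdersTable:
        Type: AWS::Serverless::SimpleTable
      ReportFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs18.x
          CodeUri: src/
          Policies:
            - DynamoDBReadPolicy:      # SAM policy template: read-only
                TableName: !Ref OrdersTable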

Resources

A general term that means “anything that isn’t a function.” The first two resources most serverless stacks will include are an API gateway, to let our functions accept web requests, and a database, to store things, but a partial list of resources would be (with AWS examples):

  • Storage (S3 buckets)
  • Databases (DynamoDB)
  • Event ingestion (API Gateway)
  • Queueing/messaging (SQS endpoints)
  • Long-running task containers (Fargate)

In general, ‘resource’ is synonymous with ‘service’ when planning a serverless stack. Both mean ‘something that your code relies on for the app to work.’
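As a sketch (names hypothetical), here’s how a few of the resources above look declared next to each other in a SAM template:

    # Hypothetical sketch: non-function resources declared alongside
    # the functions that rely on them.
    Resources:
      UploadsBucket:
        Type: AWS::S3::Bucket               # storage
      CustomersTable:
        Type: AWS::Serverless::SimpleTable  # database
      WorkQueue:
        Type: AWS::SQS::Queue               # queueing/messaging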

Secrets and Credentials

Closely related to Environment Parameters, some of the environmental information we need to give our app must be held securely. These secrets shouldn’t be stored in the application code (we certainly don’t want our API secrets public in a GitHub repository!). On AWS, secrets can be stored in AWS Secrets Manager.
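One hedged sketch of the pattern in SAM’s YAML (names hypothetical): store the secret in Secrets Manager, grant the function permission to read it, and pass the function a pointer to the secret rather than the secret itself:

    # Hypothetical sketch: the code gets the secret's ARN, fetches the
    # value at runtime, and nothing sensitive ever lands in the repo.
    Resources:
      ThirdPartyApiSecret:
        Type: AWS::SecretsManager::Secret
        Properties:
          Description: API key, set by hand in the console, never in code
      ApiFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs18.x
          CodeUri: src/
          Environment:
            Variables:
              API_SECRET_ARN: !Ref ThirdPartyApiSecret
          Policies:
            - AWSSecretsManagerGetSecretValuePolicy:
                SecretArn: !Ref ThirdPartyApiSecret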

Serverless

Serverless refers to a model where you let a service provider handle all but the most abstract management of the underlying layers of your web service. You don’t pick a virtual machine’s operating system or manage the network ports for your application; you don’t do anything but write the code for what you actually want your web service to do.

Of course, there are servers involved somewhere; it’s just that you don’t manage them directly. As Yan Cui put it in his course on production-ready serverless, serverless is free of servers the way wireless is free of wires: there are wires somewhere, you just don’t have to worry about them.

Serverless is often used to mean solely serverless functions, but any component where you don’t control or configure the underlying server is serverless, so a database where you just define your data structures and don’t worry about spinning up servers is also serverless.

There exist tools to run ‘serverless functions’ inside a server that you host and configure. The scare-quotes here are intentional! The whole point of serverless is letting a vendor worry about network config and OS software updates. If you’re doing that stuff yourself, or anyone within your organization is, you’re not doing serverless!

Serverless Application Model (AWS SAM)

The open-source template standard for describing your serverless stack. See: CloudFormation

Stack

A stack is a combination of serverless resources that interact. In general, requests and responses in a web environment should be entirely handled within one stack. From a technical perspective, a stack is the group of resources described by a CloudFormation template.

Some disambiguation: in the more general world of web development, “what’s on your stack?” is asking ‘what languages, web frameworks, database, and (sometimes) server OS are you using?’ Specific combinations get an initialism, e.g. the “LAMP stack” refers to the Linux operating system, the Apache HTTP Server, the MySQL relational database management system, and the PHP programming language.

Related: Active Stack

At Stackery, we have a free tier that lets every developer deploy a complete serverless stack in minutes, manage multiple environments, and ship updates with just a few clicks.

If you love using Stackery, our plans charge based on active stacks: the number of complete serverless stacks you have running that you use Stackery to deploy and manage.

YAML

YAML was long held to stand for “Yet Another Markup Language,” and while the official version now is that it stands for “YAML Ain’t Markup Language,” the distinction is, to me, unclear.

YAML files are more readable than XML and eschew most punctuation in favor of whitespace. It’s used quite a bit among Rails aficionados, but it matters to all Serverless application creators since YAML is the format used in the Serverless Application Model (SAM) that CloudFormation ingests.
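For anyone who hasn’t seen it, here’s a tiny hypothetical fragment showing the style: structure comes from indentation, not brackets or closing tags:

    # The same data in XML would need an opening and closing tag per key.
    service:
      name: hello-api
      runtimes:
        - nodejs18.x
        - python3.11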


Amazon Web Services (AWS) terms

AWS API Gateway

Right after you finish writing your first AWS Lambda, if you’re anything like me, you’ll try to open a terminal and send it a curl request. Your Lambda accepts requests (okay, technically they’re called “events,” but still), you wrote it to provide useful responses, so it’s time to send a request!

...all you need is your Lambda’s URL...

It turns out that Lambdas can only accept events from other AWS components. You can’t find their URL because they’re not publicly available. So how do you ping one?

AWS API Gateway performs a number of other useful functions, but for serverless it’s critical because it accepts web requests and turns them into events that Lambdas and other components can consume.
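As a sketch (names hypothetical), SAM makes this wiring a single Events entry on the function; deploying it gives you the public URL you were looking for:

    # Hypothetical sketch: API Gateway turns GET /hello requests into
    # events the function can consume.
    Resources:
      HelloFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs18.x
          CodeUri: src/
          Events:
            HelloApi:
              Type: Api        # SAM creates the API Gateway for you
              Properties:
                Path: /hello
                Method: get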

CloudFormation

When you run your first Lambda, getting started requires a few clicks within the AWS console. But this doesn’t scale: you’ll need a way to programmatically deploy Lambdas and the resources that support them. CloudFormation is AWS’s tool to do just that, letting you set up a series of changes to deploy all at once to a particular stack.

CloudFormation uses templates to represent your stack; for serverless apps, these are typically written in the Serverless Application Model (SAM) format. This YAML template format is open source and can be used by other tools, including Stackery, which offers a visual editor to create and manage your CloudFormation templates.
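A minimal, hypothetical template makes this concrete; the Transform line is what marks it as SAM, so CloudFormation knows how to expand it into full resources:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31  # this line makes it a SAM template

    Resources:
      HelloFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs18.x
          CodeUri: src/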

Lambdas

The product that introduced serverless to most people’s awareness, AWS Lambdas are small snippets of code that run when ‘triggered’ without any need to configure a server to host the code.

Lambda Layers

You can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package.
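As a sketch in SAM’s YAML (names hypothetical), you publish the layer as its own resource and attach it to a function by reference:

    # Hypothetical sketch: shared libraries packaged once, attached to a
    # function without bloating its deployment package.
    Resources:
      SharedLibsLayer:
        Type: AWS::Serverless::LayerVersion
        Properties:
          ContentUri: layers/shared-libs/  # the ZIP archive's contents
          CompatibleRuntimes:
            - nodejs18.x
      HelloFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: index.handler
          Runtime: nodejs18.x
          CodeUri: src/
          Layers:
            - !Ref SharedLibsLayer     # layer code is available at runtime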


Stackery terms

Stackery Cloudlocal

A set of capabilities Stackery offers to enable inner loop local development against live cloud resources.

Canvas & Builder

How you architect, configure and turn on your dev environment's cloud resources.

Builder & Operations Console

How you package functions and service configurations for deployment, integrate with CI/CD processes, manage dev, test, staging, and production promotion, avoid collisions and drift, and perform basic monitoring.

Stackery Cloudlocal Inner loop

Rapid function code development (5 to 12 seconds).

Cloudside Outer loop

Frequent architecture and resource configuration development (several times a day or week when under active development).


If you’ve made it this far

Congratulations, you should be ready to talk the lingo of serverless development. If you haven’t already, create a free Stackery account and get started creating your serverless apps.
