Comparing Serverless Architecture Providers: AWS, Azure, Google, IBM, and other FaaS vendors

According to the RightScale 2018 State of the Cloud report, serverless adoption grew by 75 percent over the previous year. If you are aware of what serverless means, you probably know that the market of serverless architecture providers is no longer limited to major offerings such as AWS Lambda or Azure Functions. Now we have a range of cloud providers to choose from. But why would anybody switch to serverless architecture? And what is the difference between all those providers and the services they offer?

Where does serverless come from?

To answer that question, let's roll back a bit. Cloud technologies began to be adopted in IT roughly fourteen years ago, and the market has changed rapidly ever since, with every year bringing new approaches to app development. At first, businesses mostly used the IaaS (Infrastructure-as-a-Service) model. It entailed renting servers and moving infrastructure to the cloud, but teams still had to handle server setup. Then came the gradual move away from manual server operation, and PaaS (Platform-as-a-Service) appeared. PaaS providers offered a more complete application stack, such as operating systems and databases, that ran in the cloud and was managed by the vendor. But that wasn't enough.

Allocation of server-side administration in Backend-as-a-Service technologies.

Skipping several stages of Backend-as-a-Service development, in 2014 we finally ended up with serverless. Serverless, or FaaS (Function-as-a-Service), represents a new approach to application development. In a nutshell, FaaS is a form of serverless computing that uses infrastructure fully managed by a provider: you upload functions and run them on a pay-per-request basis. Unlike other approaches to cloud computing, serverless completely abstracts developers from servers and allows them to focus on business logic. To get a deeper understanding of the FaaS concept and the pros and cons of serverless, read our dedicated article.
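To make the concept more tangible, here is a minimal sketch of what such a function looks like, written as an AWS Lambda-style Python handler; the event fields and response shape are illustrative assumptions rather than any provider-specific example from this article.

```python
import json

# Minimal Lambda-style handler: the platform invokes it per request,
# so there is no server code to write or manage.
def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```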

Benefits of serverless architecture

Despite its popularity, serverless architecture won't fit each and every company or product, since it is an approach that best serves a specific set of use cases. The benefits of going serverless also vary, as they depend on the vendor type (open source or public cloud) and the stack of serverless services offered. Generally, though, the unambiguous benefits of serverless architecture are:

Lower costs and scalability. There are several reasons why it is cheaper to launch your app on a FaaS basis. Compared to the traditional approach, it reduces the costs of server operation and maintenance. Compared to other types of cloud computing, most FaaS providers use the pay-per-request pricing model. This means you pay only for the time your functions run and for the number of invocations. In addition, you can allocate a certain amount of memory and CPU to a function and scale it up and down as needed.

Faster development and deployment. FaaS offers a more flexible alternative to writing a monolithic application. Developers write code for a set of functions instead of the whole monolith and upload these pieces of code to the server independently. That makes the application easier to debug and update, and easier to extend with new functions.

Reduced expenses on human resources. Maintaining no servers means you don't need to hire DevOps engineers for server maintenance or buy dedicated hardware.

High availability and auto-scaling. A function becomes active when the client side requests it. A function can also serve multiple requests in a row, yet it still shuts down when it is no longer needed. As traffic grows, the service automatically scales the resources allocated to a given function. This approach makes FaaS highly available and keeps performance smooth under heavy loads.

Focus on business needs. Abstracting developers from server-side work allows your team to focus on the business logic of your application.

Serverless architecture providers overview

In 2018, the roster of serverless architecture providers grew, as new players had been entering the market over the two previous years. Providers can be divided into major and minor groups. The major group consists of the biggest public cloud providers of serverless architecture. In this article, we'll compare the main players and mention some alternatives to them.

AWS Lambda. The FaaS offering from Amazon Web Services was introduced in 2014. Since its release, Lambda has become synonymous with serverless, holding the position of the leading product on the market with the widest range of services available. Probably the best-known example of public serverless adoption is Netflix.

Azure Functions by Microsoft. The service launched in 2016 to compete with AWS Lambda. Azure Functions offers a set of services similar to Amazon's, with a focus on the Microsoft family of languages and tools. One example of using Azure Functions is Have I Been Pwned. If you are interested in the application's structure and how it performs on Azure, you may check the volume report containing detailed information on analytics and expenses.

Google Cloud Functions (GCF). One of the four largest providers, Google released its solution only in 2017. GCF used to lag behind Azure Functions and Lambda, but during 2018 Google managed to fix many of the earlier shortcomings, as evidenced by the GCF release notes.

IBM Cloud Functions. Relatively new to the serverless scene, IBM stepped into the game with a competitive set of services. IBM Cloud Functions is the managed-infrastructure offering built on Apache OpenWhisk within IBM's cloud services. If you prefer an open-source, self-hosted solution, Apache OpenWhisk itself would be a more suitable option.

All of the mentioned providers offer similar services, enough to launch an application on a managed infrastructure. They also provide sufficient capabilities to get all the benefits of the FaaS concept, but they may perform differently. To figure out the best option for you, let's compare the available services using the following criteria:
  • Pricing models and billing factors
  • Programming languages supported
  • Function trigger types
  • Execution duration per request and concurrency
  • Deployment methods
  • Monitoring and logging methods

Major FaaS providers compared

Pricing models and billing factors

As mentioned earlier, most FaaS providers use the pay-per-request pricing model, which is quite cost-effective. To calculate the costs of your app, there are services that predict your potential expenses fairly accurately. Serverlesscalc is a tool, currently in beta, designed to calculate costs specifically for the Big 4 serverless providers. In addition, every provider has its own calculation tool.

All vendors provide similar pricing; however, Google's model is the most expensive one due to separate billing for memory and CPU usage.

Lambda offers a free tier that includes 1 million requests and 400,000 GB-seconds of compute time per month. Everything beyond the free tier is billed at $0.00001667 per GB-second, which is the lowest price on the market. In real-world practice, the free tier lets you run your app for quite a while before billing starts. Allocated resources (memory and CPU) are billed as a single unit, because both grow proportionally. Additional expenses may come from using other AWS services within your Lambda function.
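As a rough illustration of how these numbers play out, here is a back-of-the-envelope estimate in Python using the free tier and GB-second rate quoted above; the workload figures are assumptions, and the per-request charge beyond the free 1 million invocations is ignored for simplicity.

```python
# Assumed workload: 5 million invocations, 200 ms average duration, 512 MB allocated.
invocations = 5_000_000
avg_duration_s = 0.2
memory_gb = 0.512

gb_seconds = invocations * avg_duration_s * memory_gb   # 512,000 GB-s
billable = max(0.0, gb_seconds - 400_000)               # free tier: 400,000 GB-s
compute_cost = billable * 0.00001667                    # rate quoted above, $/GB-s

print(f"GB-seconds used: {gb_seconds:,.0f}")
print(f"Estimated monthly compute cost: ${compute_cost:.2f}")  # roughly $1.87
```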

Azure is billed the same way as Lambda, with the only difference being a rate of $0.000016 per GB-second; the free tier is identical. Heavy-load costs on Azure are slightly lower than on Lambda and equal to Lambda's for an average load. Note that Microsoft bills consumed memory rather than allocated memory.

Azure also offers lower pricing for Windows and SQL usage, which is quite logical. So the choice between the two depends more on the environment you use than on the costs you incur.

The GCF free tier is 2 million requests per month with the same 400,000 GB-seconds, and $0.0000004 per request after that, with networking traffic included. Considering the length of time a function runs and the number of requests, expenses with Google Cloud Functions are notably higher. Resource billing also differs: GCF bills allocated memory and CPU separately.

IBM's free tier is similar to Lambda's and Azure's: 400,000 GB-seconds and 1 million requests. Pricing above that threshold is $0.000017 per GB-second. As for billing factors, IBM OpenWhisk bills the resources consumed while the function was active.

Summing up, AWS Lambda offers a middle ground in pricing, while Azure expenses can vary depending on the CPU and memory used. For Windows environments, though, Azure offers the lowest price.

Programming language support

A FaaS provider is a public cloud, which means you are running your app in a managed environment, and every vendor supports a different set of languages.

Lambda covers a wide range of programming languages, including the Node.js runtime, Python, Java (and languages that compile to it), and .NET languages (C#, Visual Basic, and F#).

Azure Functions, unsurprisingly, keeps the focus on Microsoft's family of languages and tools, and lists JavaScript and languages compiled to it via the Node.js runtime, C#, F#, Python, PHP, Bash, Batch, and PowerShell.

Google Cloud Functions used to support only JavaScript, but Google has announced that several other languages are going through beta testing. So, in the long run, GCF has a chance to keep up with the other major vendors; for now, though, it doesn't look like the most flexible choice.

As for IBM, the service currently supports the Node.js runtime, Swift, Java, PHP, and Python. It is also possible to bring any other programming language via Docker containers.

Table of supported languages. Currently, GCF has the most limited support, but more languages are planned to become available soon.

Azure and Lambda support more languages than the other providers, while Google has only announced more languages for the future. Comparing Google with IBM, the latter can still use any language via containers, so the only real difference is learning how to containerize your code.

Trigger types

Major vendors offer various configurable and dynamic trigger types that can be used to invoke a function. Triggers are provided with the help of the other cloud services each vendor runs. In general, all major vendors support scheduled invocation of a function, on-demand calls, and integrations with their other cloud services. You may find more detailed information in each provider's documentation.

Lambda and Azure offer trigger types configured via API. For AWS, these are API Gateway for API triggers, file-based triggers via Amazon S3, and dynamic triggering via DynamoDB. Azure offers web API triggering, scheduled invocation, and trigger types based on other Microsoft services such as Azure Event Hubs and Azure Storage.
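As a sketch of what a file-based trigger looks like in practice, here is a minimal Python handler for an Amazon S3 event; it only logs each uploaded object, and the bucket and key values come from the standard S3 notification payload.

```python
# Invoked by S3 whenever a new object lands in the configured bucket.
def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
```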

GCF provides a list of its main supported triggers and additional triggers in its documentation. The main feature of GCF trigger types is that your application can be integrated with any Google services that support Cloud Pub/Sub or HTTP callbacks.

IBM seems to be a dark horse in terms of trigger variety, as it doesn't have as many first-party services to integrate with. Nevertheless, it still supports common trigger types such as HTTP invocation and scheduled triggering.

So, if you choose a vendor on the basis of triggering methods, the best choice would be Azure or Lambda, as they offer more trigger types and tighter integration with their own cloud services. GCF takes third place in case you need to trigger functions via Google services.

Execution time and concurrency

Another important aspect of invoking a function is the time a function is allowed to stay active and concurrency. Concurrency means the parallel execution of function instances during a given time period.

Google offers the best concurrency rate, but if you are looking for long execution times, AWS Lambda would be your best choice.

Lambda limits the concurrency rate to 1,000 executions at a time, with a maximum execution time of 15 minutes. Concurrency can be configured for the whole account or for an individual function.
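For illustration, reserving concurrency for a single function can be done through the AWS SDK; below is a minimal boto3 sketch, assuming a hypothetical function name.

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap this one function at 100 concurrent executions so it cannot
# exhaust the account-wide concurrency limit.
lambda_client.put_function_concurrency(
    FunctionName="image-resizer",        # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```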

Azure offers an unlimited concurrency rate within one function app but limits the maximum execution time of a single function to 5 minutes, extendable to 10 minutes.

GCF allows an unlimited number of invocations for the HTTP trigger type, which is a good option. For other triggering methods, concurrency is the same as Lambda's: 1,000 executions at a time. Execution time for a single function is limited to 60 seconds by default but can be raised to nearly 9 minutes. It is important to mention that AWS Lambda counts concurrent executions per account, while GCF counts them per project. That means on AWS you may run only one function with 1,000 concurrent invocations, while on GCF it is possible to run multiple functions with that concurrency each.

IBM sets no time limit for a single function invocation. Its concurrency rate remains unclear. As the IBM Cloud Functions documentation mentions, there are no guarantees about how concurrent functions behave:

“Two actions can run concurrently and their side effects can be interleaved. OpenWhisk does not ensure any particular concurrent consistency model for side effects.”

So, if you want your functions' performance to be smooth, there is no critical difference between Lambda, GCF, and Azure, while IBM provides no clear information about its concurrency rate. But if you are focused on long-running invocations, Lambda and IBM would be the better choices.

Deployment methods

There is little difference in deployment methods across vendors. In general, when deploying with the Serverless Framework, a developer uses serverless.yml to configure functions and describe changes made to them. The code of your functions is then packed into ZIP files and pushed to the provider.

Lambda updates each function of your app separately, while Azure, GCF, and IBM tend to parse serverless.yml via a plugin and upload resources, differing mainly in the order of operations.
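To show what the "zip and push" step amounts to without the Serverless Framework, here is a hedged sketch that packages a single handler file and updates an existing AWS Lambda function via boto3; the file name and function name are assumptions.

```python
import io
import zipfile
import boto3

# Package the handler into an in-memory ZIP archive.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.write("handler.py")          # assumes handler.py is in the working directory
buffer.seek(0)

# Push the new package to an already-created function.
lambda_client = boto3.client("lambda")
lambda_client.update_function_code(
    FunctionName="my-service-hello",     # hypothetical function name
    ZipFile=buffer.read(),
)
```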

Monitoring and logging

Monitoring is necessary in serverless computing because all the infrastructure is managed by a vendor. So, to see what exactly happens with your application and to apply metrics, each service has to offer monitoring and logging tools. These give you an overview of the resources allocated and used, and help you detect bugs, inspect logs, and so on.

Amazon offers its own tool, CloudWatch, which helps observe Lambda function invocations and logs. CloudWatch, however, has faced criticism for its limited functionality, considering the fact that it is a paid service. The counterparts are Azure Monitor by Microsoft, Stackdriver by Google, and IBM's own logging and monitoring.
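As an example of the kind of data these tools expose, here is a hedged sketch that pulls hourly invocation counts for one Lambda function from CloudWatch with boto3; the function name is hypothetical.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hourly invocation counts for the last 24 hours.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-service-hello"}],  # hypothetical name
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```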

Another service by Amazon is X-Ray, a distributed tracing system for various AWS services. It seems to perform well, but its main purpose is to monitor microservice applications rather than functions. To monitor a serverless app, there are also third-party options:
  • Dashbird is a free service for AWS monitoring that adds functionality on top of CloudWatch and offers a more user-friendly interface.
  • OpenTracing is a vendor-neutral distributed tracing API rather than a ready-made monitoring tool; it has to be configured for a given vendor. OpenTracing supports 9 languages: Go, JavaScript, Java, Python, Ruby, PHP, Objective-C, C++, and C#.
  • Thundra hasn’t been released, but it is already available in beta. The main feature of the service is that it will keep the focus on JavaScript, and present monitoring and logging features based on the experience of using AWS X-Ray, and integrate with it.

Other options to consider

As the comparison shows, all the major serverless stack providers offer relatively comparable infrastructure services. AWS Lambda and Azure Functions are still the most complete and diverse services to work with. So the choice depends more on the environment you want to work in, programming language support, and community.

The general drawback shared by all public cloud providers is vendor lock-in. Having the infrastructure maintained for you brings a lot of benefits, but if your product requires more control, consider open-source serverless frameworks:

No vendor lock-in and deep integration options. IronFunctions is an open-source serverless computing platform supporting private, public, and hybrid clouds. The idea behind the product is to give developers a serverless platform that can run anywhere. IronFunctions is written in Go, which isn't a common choice among other serverless options.

For container-driven architecture. Oracle Fn Project is an open-source serverless platform that is native to containers and fully agnostic to languages and cloud environments. Similar to IronFunctions, Fn Project can be used on public, private, and hybrid clouds.

Another option for container-driven infrastructure is Kubernetes with Kubeless, its native serverless framework for code deployment. Kubernetes is an open-source system for automating the deployment, management, and scaling of containerized applications.

Google-driven mobile development. Firebase is a backend platform for mobile application development with infrastructure managed by Google. It integrates smoothly with other Google Cloud services and is the best fit for Google-driven products.

For mobile or single-page applications. webtask is a completely free FaaS platform. It fits best for mobile or single-page applications that don't require a heavy backend, and it supports various integration scenarios.

Choosing an open-source serverless framework means operating and maintaining the infrastructure on your own, but it also gives you more control over your app.
