Migrating a Serverless application backend to the Serverless Framework

Tai Nguyen Bui
The Agile Monkeys’ Journey
10 min read · Jan 16, 2019


In this article you will learn how to set up an existing Serverless application as a Serverless Framework project, making your application easier to manage. The Serverless Framework can also make automated backend deployments a lot simpler.

During the preparation for an AWS Certification through A Cloud Guru (which we HIGHLY recommend to anyone interested in building their AWS knowledge), we came across an awesome but simple lab that consists of creating a Serverless Application. It was basically a text to audio converter made of the following AWS resources: AWS S3, API Gateway, Lambda, DynamoDB, SNS and Polly.

The diagram below shows the interaction between AWS resources:

Serverless application architecture

Let’s talk a little bit about the architecture for this application:

S3 Bucket hosting a static site that makes requests to an API Gateway

API Gateway that serves two different requests:

  • AWS Lambda (1) that retrieves records from DynamoDB
  • AWS Lambda (2) that creates new records in DynamoDB and also creates an SNS Notification

Lambda (3) that is triggered by SNS Notifications and performs four sequential tasks:

  • Retrieve a record from DynamoDB that contains the text to be converted
  • Pass the text to Polly and receive the audio streams
  • Store the audio received into S3 as mp3
  • Update the DynamoDB record with the audio url and new status

In this A Cloud Guru Lab the intention was to learn how to configure each of the AWS resources and how AWS Lambda could act as the glue among them. As a result, this Serverless application architecture was configured manually through the AWS Console, and it was awesome and very educational!

The application sounds really cool, right? But what happens if we want to easily replicate the same application in another region, or just make updates to the existing one?

We thought that migrating the above A Cloud Guru Lab to the Serverless Framework could help us a lot, so here we are… showing you how to do it :D

Prerequisites:

  1. Download resources from https://github.com/theam/makerlab-103-Polly
  2. Install Serverless Framework CLI
  3. Have an AWS account and AWS credentials for your IAM user (Access Key Id and Secret Access Key) set up on your computer
  4. Have a text editor, something like Visual Studio Code can help you a lot
  5. Hunger for learning cool stuff

Let’s get started!

1. Creating the project

First of all, we will need to create a folder for our project. At this point, I highly recommend that you also initialize a git repository so that your project is under version control.

Now, we are going to copy the following A Cloud Guru Lab resources to our project:

These are the 3 Lambda functions that our project will have. The next task will be to create a serverless.yml file in the root of the folder and add the following code inside:

Note: your service name should not contain spaces or special characters; hyphens can be used to separate words.

The above code will define the name of the service, which will also be the name of your CloudFormation stack when deployed. This configuration will also exclude all the files except the ones that you specify to be included for each Lambda function. Furthermore, we are also setting up information about the provider name, the runtime for our functions and default values if stage or region are not given as part of the sls deploy command.
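Since the original snippet is not embedded here, a minimal sketch of what such a serverless.yml could look like follows; the service name and runtime are placeholders, so adjust them to your project:

```yaml
# Sketch only: service name and runtime are assumptions
service: text-to-audio

provider:
  name: aws
  runtime: python3.6
  stage: ${opt:stage, 'dev'}      # default stage if none is passed to sls deploy
  region: ${opt:region, 'us-east-1'}

package:
  individually: true
  exclude:
    - "**"   # exclude everything; each function will include only its own files
```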

Awesome! Let’s see what happens when we deploy our stack. Run the following command in your terminal:

sls deploy

and you should see something similar to this:

sls deploy output

I bet that you’re wondering what you deployed 😕!

Well… you have successfully deployed a CloudFormation stack. This stack will contain the application that we will build later on, but for now, it does not have any AWS resources.

Note: If you ever want to delete the stack, with your Serverless application in it, you just need to run in the terminal:

sls remove

2. Creating AWS resources

The Serverless Framework lets us easily create the necessary resources for our application. In this example, we will just need a DynamoDB table and an S3 bucket.

Before we create the resources, and following good practice, we will define variables with their names. Paste the following code into your serverless.yml:

All our resources will start with the name of our service followed by the stage, by default dev. This will help us identify the resources belonging to this application.
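A sketch of such a custom section could look like the following; the variable names are assumptions:

```yaml
custom:
  # Hypothetical variable names; resources are prefixed with service and stage
  postsTableName: ${self:service}-posts-${opt:stage, 'dev'}
  mediaBucketName: ${self:service}-media-${opt:stage, 'dev'}
```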

Now that we have a name for our Table and Bucket we will paste the following code in the serverless.yml file:

The DynamoDB table will be created with a primary key named id (a string) and a read/write throughput of 1 unit each. One read unit provides up to 4 KB/s of reads and one write unit up to 1 KB/s of writes, which is fine for now. However, if you later experience timeouts in your functions due to a high load of reads/writes, I suggest updating the throughput to a higher value, or switching to on-demand capacity, and redeploying the stack.

We’re also creating an S3 Bucket with the name specified in the custom section.
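A rough sketch of that resources section, assuming the variable names defined in the custom section, might look like:

```yaml
resources:
  Resources:
    PostsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.postsTableName}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S      # primary key "id" is a string
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    MediaBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.mediaBucketName}
```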

Let’s deploy our updated stack so we’re able to look at our AWS Console and see that a new S3 bucket and a new DynamoDB table have been created.

sls deploy

We are halfway through!

3. Deploying getPosts function

Since we already have the code for our getPosts function, we just need to paste the following code to our serverless.yml containing the definition of the function:

The handler needs to point to the function defined in the getPosts.py file; in this case, it is called lambda_handler. Additionally, the getPosts.py file needs to be added to the package, and a description is optional but recommended. Setting the memory size is also optional; it defaults to 1024 MB. Environment variables are set under the environment section. Our code consumes an environment variable called POSTS_TABLE, so we will define it in this section with the name of the table that we previously set.

Finally, we will configure the trigger for this function, which will be a GET request through the API Gateway. Since we are setting up a mapping for the request that will be in charge of converting the query parameter postId into a JSON object, we will need to define a template and set the integration to lambda. Additionally, CORS needs to be enabled as it will be hit from a static site. Furthermore, we will also reject any request that does not match our request template.
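Putting those pieces together, a sketch of the function definition could look like the following; the path and the exact request template are assumptions:

```yaml
functions:
  getPosts:
    handler: getPosts.lambda_handler
    description: Retrieves posts from the DynamoDB table
    memorySize: 128
    package:
      include:
        - getPosts.py
    environment:
      POSTS_TABLE: ${self:custom.postsTableName}
    events:
      - http:
          path: /
          method: get
          cors: true                 # the static site will call this endpoint
          integration: lambda
          request:
            passThrough: NEVER       # reject requests that don't match the template
            template:
              # map the postId query parameter into a JSON body
              application/json: '{"postId": "$input.params(''postId'')"}'
```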

There is some extra work to do: you may have noticed that we have not given this Lambda any permissions to access DynamoDB, so we would run into errors if we deployed the function now. What we need to do is create a role for the Lambda functions.

The following code will help us define our new IAM role:

This role, which will be assigned to our functions, is quite permissive since it grants access to any resource in the AWS account, but for the purposes of this tutorial we will be okay with it. DO NOT USE THIS IN PRODUCTION; be more granular in that case.
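One way to express such a role in the Serverless Framework is with iamRoleStatements under the provider section; the exact actions below are a guess at what getPosts needs:

```yaml
# Added under the provider section; sketch only
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Scan
      - dynamodb:GetItem
      - logs:CreateLogGroup
      - logs:CreateLogStream
      - logs:PutLogEvents
    Resource: "*"   # overly permissive, fine for the tutorial only
```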

It is time to deploy our stack again to see what happens.

sls deploy

sls deploy output with endpoints and functions

We are now able to hit our new endpoint by simply typing in the terminal:

curl "https://<your-id>.execute-api.us-east-1.amazonaws.com/dev?postId=*"

The output will be an empty array since we have not put any information in it yet.

4. Deploying newPosts function

Now that we are able to retrieve posts, it is a great time to create a function to generate posts and then use the getPosts function to retrieve them.

For adding the newPosts function to our stack we will do a very similar process, paste the following code below the getPosts function in your serverless.yml:

As you may have already noticed, this is a POST method and requires two environment variables, POSTS_TABLE and SNS_TOPIC. The rest is configured in the same way as the previous function.
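A sketch of what that function definition could look like follows; the file name, the topic's logical id, and the path are assumptions:

```yaml
  newPosts:
    handler: newPosts.lambda_handler
    description: Creates a post in DynamoDB and publishes an SNS notification
    package:
      include:
        - newPosts.py
    environment:
      POSTS_TABLE: ${self:custom.postsTableName}
      SNS_TOPIC:
        Ref: SNSTopicNewposts   # logical id Serverless generates for the topic; name is a guess
    events:
      - http:
          path: /
          method: post
          cors: true
          integration: lambda
```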

In addition to the above, we will need to give some more permissions to the role we already created. We will add the following permission:

- dynamodb:PutItem

it should now look similar to:
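As a sketch, the updated statements could read as follows; note that sns:Publish is also included here on the assumption that the function needs it to publish the notification:

```yaml
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem     # newly added permission
      - sns:Publish          # assumed to be needed for the SNS notification
      - logs:CreateLogGroup
      - logs:CreateLogStream
      - logs:PutLogEvents
    Resource: "*"
```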

Let’s update our stack

sls deploy

Once it is successfully deployed, we can test our new function; type in the terminal:

curl --request POST --header "Content-Type: application/json" --data '{"voice":"Joanna","text":"some random text"}' https://<your-id>.execute-api.us-east-1.amazonaws.com/dev

You will see an error that we will fix when deploying the next function 😅. However, the post has been created successfully, and you can now retrieve it by typing in the terminal:

curl "https://<your-id>.execute-api.us-east-1.amazonaws.com/dev?postId=*"

5. Deploying convertToAudio function

Great job so far, we now have a function to create posts and another to retrieve those posts. The last bit will be to configure our last Lambda in charge of converting text into audio and updating the status of our post. For that, we will add our last function to the serverless.yml:

A lot of the above parameters are familiar to you already. However, a few that may catch your attention are the greater memory size, the timeout, and an SNS event. The reason for increasing the memory size is the tasks this function will be carrying out: if you look into its code, it splits the input text into chunks of 1000 characters in order to send them to Polly. The audio streams contained in Polly's responses are then appended, converted into an mp3 file, and uploaded to an S3 bucket. Furthermore, since the default timeout for a function in the Serverless Framework is 6 seconds, we need to increase it to make sure the task finishes before the function times out. Finally, the trigger event for this Lambda is an SNS notification, which is the reason why we pass an environment variable with the SNS topic name in the newPosts function. The Serverless Framework will create the SNS topic automatically just by specifying that the trigger is an SNS event.
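As a sketch, the function definition could look like the following; the file name, memory size, timeout, topic name, and environment variable names are assumptions:

```yaml
  convertToAudio:
    handler: convertToAudio.lambda_handler
    description: Converts post text into an mp3 file with Polly
    memorySize: 512      # larger than the default request handlers need
    timeout: 300         # the 6 s default is not enough for long texts
    package:
      include:
        - convertToAudio.py
    environment:
      POSTS_TABLE: ${self:custom.postsTableName}
      BUCKET_NAME: ${self:custom.mediaBucketName}
    events:
      - sns: new-posts   # Serverless creates this topic automatically
```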

Since there are a few new actions that this Lambda will be performing, it is time to expand the permissions in the IAM role that we already created.

The final IAM role for the Lambda functions will look as follows:

There are 6 new permissions that we have added. This is due to the fact that this function not only converts text into audio through Polly but also stores the audio in S3, makes the object public, and updates the record in DynamoDB.
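A sketch of the final statements might include the following actions; the exact six additions depend on the function code, so treat this list as a guess:

```yaml
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:Query          # assumed: fetch the record for a given post id
      - dynamodb:UpdateItem     # assumed: update the post status and audio url
      - sns:Publish
      - polly:SynthesizeSpeech  # convert text chunks into audio streams
      - s3:PutObject            # store the mp3 in the media bucket
      - s3:PutObjectAcl         # make the object public
      - s3:GetBucketLocation    # assumed: build the object's public url
      - logs:CreateLogGroup
      - logs:CreateLogStream
      - logs:PutLogEvents
    Resource: "*"
```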

We are almost done, so let’s deploy our stack for the last time:

sls deploy

6. Testing our Application

If you are reading this you have successfully deployed a Serverless application and we are now ready to test it fully:

First of all, we will create a new post:

curl --request POST --header "Content-Type: application/json" --data '{"voice":"Joanna","text":"some random text"}' https://<your-id>.execute-api.us-east-1.amazonaws.com/dev

and then we will retrieve the post we just created.

Note: Replace <post-id> with the id returned when you created the post. If you use *, all posts will be returned.

curl --request GET "https://<your-id>.execute-api.us-east-1.amazonaws.com/dev?postId=<post-id>"

A URL will be returned in response to the above GET request; it points to the mp3 file that was just created for that post.

7. (Optional) Deploying the Frontend for our Application

As part of the resources that you have downloaded, you may have seen three files:

These are the files that we will need for the frontend. We will start by creating a frontend folder in the root of the project and another folder inside frontend called public. Copy the above files in the public folder.

Then, modify the URL of the scripts.js file with the URL of your API Gateway endpoint, which was returned when you created the CloudFormation stack. For example:

var API_ENDPOINT = "https://<your-id>.execute-api.us-east-1.amazonaws.com/dev"

Create another serverless.yml inside the frontend folder and paste the following code:

Note: don’t forget to update the service parameter with your service name

The above configuration will create an S3 bucket and make its contents publicly readable.
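A sketch of that frontend serverless.yml, using the serverless-finch plugin's client options, could look like this; the service and bucket names are placeholders:

```yaml
# Sketch; replace the service and bucket names with your own
service: <your-service-name>-frontend

provider:
  name: aws

plugins:
  - serverless-finch

custom:
  client:
    bucketName: <your-service-name>-frontend-dev   # must be globally unique
    distributionFolder: public                     # folder holding the static files
```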

Finally, in order to deploy our static site through the Serverless Framework, we will need to install a Serverless Framework plugin:

npm install --save serverless-finch

After successfully installing serverless-finch, we only need to go to the frontend folder and run:

sls client deploy

The URL of our new static site will be displayed.

8. Cleaning up resources

You may now want to delete all the resources you have created.

We will start by deleting the frontend. Go to the frontend folder and run the following command:

sls client remove

Then we will proceed with the deletion of all media objects inside the media S3 bucket, its name will be something similar to <your-service-name>-media-dev. This deletion needs to be done manually in the AWS console.

Now that we have deleted all the objects in that bucket, we will be able to delete our entire stack without any errors. Move back to the root of the project if you are still in the frontend folder, and run:

sls remove

Congratulations!

You have spun up a Serverless application backend with (possibly) a frontend and you have removed the entire application in just two commands. I hope you have enjoyed it.

Big thanks to the awesome team at The Agile Monkeys that helped review this article.

If you have any questions, do not hesitate to raise an issue in the resources repository or ping us.
