Securing a Web Application with AWS Application Load Balancer

Matthew Bradburn
_Editor's note: while we love serverless at Stackery, there are still some tasks that require a virtual machine. Need to install an image library at the OS layer? Want to configure ports for some reason? AWS Lambda doesn't let you do that. If you're still using an Elastic Compute Cloud (EC2) virtual machine, enjoy this very useful tutorial on load balancing._

I was recently called upon to secure an Nginx web server with HTTPS, and my goal was to set this up with a certificate obtained from AWS Certificate Manager. It took me a while to figure out how to get everything configured and working, so hopefully this writeup will save you some time if you're attempting the same thing!

It was pretty convenient for me to use AWS Certificate Manager (henceforth "ACM") to generate the server certificate, since my application was already running on an EC2 instance in Amazon's cloud. Interestingly, with ACM you never get to see the certificate's private key yourself -- you have to use some AWS service to serve up the certificate. That's what I'm using AWS Application Load Balancer ("ALB") for, even though I have only a single instance at the moment, so there's no actual load balancing going on. ALB is perfectly willing to send all traffic to a single EC2 instance; it doesn't care.

In my case I set it up so that the ALB is the HTTPS termination point: it does all the SSL handshaking and decrypts HTTPS before sending plain HTTP to my application (and encrypts responses from my application back to the client). So the first thing I did was get my application set up and serving HTTP on an EC2 instance, with the security group and network ACL properly configured.

Obtain a Certificate

Go to the ACM console and request a public certificate for your domain name. You'll have to verify ownership of the domain; in my case the domain was set up in AWS Route 53, so I verified ownership by adding a particular CNAME record for the domain -- ACM tells you the contents of the CNAME record to create and then verifies that you were able to create it.
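The same request can be made from the AWS CLI if you prefer. This is just a sketch -- the domain name, region, and certificate ARN below are placeholders:

```shell
# Request a public certificate, validated via a DNS record
aws acm request-certificate \
    --domain-name my-domain.com \
    --validation-method DNS \
    --region us-west-2

# The response includes a CertificateArn; describe the certificate to see
# the CNAME record ACM wants you to create for validation
aws acm describe-certificate \
    --certificate-arn arn:aws:acm:us-west-2:123456789012:certificate/example-id \
    --query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```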

You can also verify ownership of the domain by having an email sent to the domain owner, with a link to click that verifies you received the email. It takes a few minutes for the domain verification to complete.

Set up ALB

Set up ALB in front of the EC2 instance. This is done in the EC2 console: there's a section in the left-hand column for Load Balancers, and selecting it lets you create a new one. Either an Application Load Balancer or a Network Load Balancer can serve a certificate for you, but I haven't done any experiments with NLB.

When setting up the ALB, a lot of the settings are pretty obvious, although setting up Listeners is important and maybe a little obscure. A Listener is a process that runs on the ALB to receive the traffic that is to be balanced. In this case I create just a single listener, which listens for HTTPS traffic on port 443 (the default port for HTTPS).
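For reference, a listener like the one described looks roughly like this from the CLI; all of the ARNs here are placeholders:

```shell
# HTTPS listener on port 443 that serves the ACM certificate and forwards
# decrypted traffic to a target group
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-alb/abc123 \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:us-west-2:123456789012:certificate/example-id \
    --ssl-policy ELBSecurityPolicy-2016-08 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/def456
```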

The ALB lives in the same VPC as my application EC2 instance, and you have to pick subnets in at least two Availability Zones in order to make the load balancer highly available. Since I have only a single application instance, I make sure its AZ and subnet are among the ones I select for the ALB. Because my ALB is internet-facing, AWS will choose public IP addresses for me.
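The CLI equivalent is roughly the following; the subnet and security-group IDs are placeholders, and note that the two subnets must be in different AZs:

```shell
# Internet-facing ALB spanning two subnets in different AZs
aws elbv2 create-load-balancer \
    --name my-alb \
    --subnets subnet-0aaaaaaaaaaaaaaaa subnet-0bbbbbbbbbbbbbbbb \
    --security-groups sg-0cccccccccccccccc \
    --scheme internet-facing \
    --type application
```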

(Aside from internet-facing load balancers, there are also "internal" load balancers, which might be used to balance traffic among servers in a back tier of an application.)

For the ALB security settings, I of course choose the certificate that I obtained earlier. You can also choose a security policy to determine which encryption algorithms will be allowed for SSL handshaking; I don't know of any reason not to use the default.

You also have to attach a security group to your ALB; you'll want one that allows HTTPS traffic in from the internet. (The security group on the EC2 instance, in turn, needs to allow HTTP traffic in from the ALB.)

When you configure routing for the load balancer, you're describing how it interacts with the EC2 instances that requests are forwarded to. I create a new target group with a target type of "instance"; the protocol between the load balancer and the instance is HTTP on port 80.
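From the CLI, a target group like this one looks roughly as follows; the VPC ID is a placeholder, and the health-check path is whatever endpoint your application exposes for that purpose:

```shell
# Target group: ALB speaks plain HTTP on port 80 to the instances
aws elbv2 create-target-group \
    --name my-targets \
    --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type instance \
    --health-check-path /health
```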

I also configure health checks -- an endpoint the load balancer can ping on each instance to determine whether it's healthy, so traffic won't be sent to dead instances. My application conveniently exposes an endpoint for this purpose, but for other web applications this might be as simple as sending a GET to / and expecting a 200 back.
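If your application doesn't already have a health endpoint, it only needs to return a 200 on the configured path. Here's a minimal sketch using just the Python standard library -- the handler name, path, and port are all illustrative, and in a real app you'd add a route in whatever framework you're already using:

```python
# Minimal health-check endpoint: the ALB health checker sends a GET to the
# configured path and treats an HTTP 200 response as "healthy".
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = b"OK"
            self.send_response(200)  # 200 is what the ALB counts as healthy
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the periodic health-check pings out of the access log

# To serve: HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```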

Then you register your target instances into the target group, and you can start the creation of your ALB. It takes a few minutes, and after its state becomes "active" you can also look at the target group (select "Target Groups" from the left-hand column) and check whether it thinks your instances are healthy.
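The registration and health-check inspection can also be done from the CLI; the ARN and instance ID below are placeholders:

```shell
# Register the EC2 instance with the target group
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/def456 \
    --targets Id=i-0123456789abcdef0

# Check whether the ALB currently considers the target healthy
aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/def456
```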

The default is for the ALB to do a health check every 30 seconds, and it wants five consecutive successful replies to its pings before marking a target healthy, so it may take a few minutes for an instance to transition from unhealthy to healthy.

Once the load balancer is created, AWS gives it a public DNS name, which is shown near the top of the load balancer description. You want your DNS provider to send traffic for your registered domain name to the load balancer. I'm using Route 53, so I just create an ALIAS record pointing my-domain.com at the load balancer's DNS name as found in the description of the newly-created ALB.
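For reference, the same ALIAS record can be created from the CLI. Everything below is a placeholder; note in particular that the AliasTarget's HostedZoneId is the ALB's canonical hosted zone ID for its region (shown in the ALB description), not your own Route 53 zone's ID:

```shell
# UPSERT an alias A record pointing the apex domain at the ALB
aws route53 change-resource-record-sets \
    --hosted-zone-id Z_MY_ROUTE53_ZONE_ID \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "my-domain.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z_ALB_CANONICAL_ZONE_ID",
            "DNSName": "my-alb-1234567890.us-west-2.elb.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```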

At this point, you're ready to start testing things. Don't feel bad if it doesn't work perfectly the first time; it took me some tinkering to get all the settings coordinated. If you run curl -v https://my-domain.com it will log messages about the SSL handshake, which may be helpful.

Web Server Issues

One issue I noticed when I tried to log in to my application was a result of the protocol conversion that's going on: when I access my application I have to use HTTPS, but the web server hosting the application (Nginx in my case) believes it's serving HTTP. So when I navigate to an authenticated endpoint before logging in, Nginx wants to redirect me to the login page, but the redirect refers to HTTP, which is not accepted by the ALB. So I updated /etc/nginx.conf to include a rule like proxy_redirect http://my-domain.com https://my-domain.com to fix this issue.
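For context, here's roughly what the relevant piece of Nginx configuration looks like; the upstream address and port are placeholders for wherever your application actually listens:

```nginx
server {
    listen 80;
    server_name my-domain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;  # your application's local port
        # Rewrite HTTP redirects issued by the app back to HTTPS, so the
        # browser never tries to reach the ALB over plain HTTP
        proxy_redirect http://my-domain.com/ https://my-domain.com/;
    }
}
```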

There may be other issues of this type depending on the application; for example, you can imagine that a blog server application might be able to generate permalinks internally, which would need to be configured to refer to HTTPS rather than HTTP.

Cost

Certificates from ACM are free of charge, which is a good price, although you'll end up paying for the AWS EC2 instance and load balancer you use them with.

The cost of the ALB varies depending on the region and the amount of data transferred through it, but in my region it costs just over $16 per month to have the ALB sitting there serving up my certificate. That's a little expensive for personal art projects (see https://www.humanclock.com) but I imagine it's affordable if you have the sort of application that needs the load-balancing feature.

End-to-end Encryption

One caveat is that some applications may be required to provide end-to-end encryption due to some regulatory burden. The architecture I've described, where HTTPS is terminated in ALB, does not provide this, and packets are unencrypted inside my AWS VPC between the ALB and the EC2 instances. It's also possible to set up ALB such that it serves the ACM certificate and then re-encrypts the traffic before sending it to the back-end instances using HTTPS.
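Concretely, re-encrypting mostly comes down to making the back-end protocol HTTPS in the target group; a sketch (the VPC ID is a placeholder), assuming the instances serve a certificate of their own -- it can be self-signed, since the ALB does not validate the back-end certificate:

```shell
# Target group that re-encrypts: the ALB terminates the ACM certificate
# on 443, then speaks HTTPS to the instances as well
aws elbv2 create-target-group \
    --name my-targets-https \
    --protocol HTTPS --port 443 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type instance
```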

Adding a VPC in Stackery is easy

When you add a VPC to your serverless stack with Stackery, all of the configuration is done for you-- including built-in security best practices!

We hope our forays into AWS are helpful -- even when they aren't focused on getting AWS serverless resources like VPCs, permissions, and more correctly name-spaced and organized. Cloud on.

Questions about this process or any other Stackery task? Don't miss our weekly livestreams at 10 AM PDT. Our serverless engineers and experts provide a demo of the product and field your questions. It's a great way to get direct guidance from the serverless pioneers who specialize in your specific question. Occasionally, we have guest hosts like James Beswick and Jeremy Daly, too!

View past livestreams and register for the next one here


© 2022 Stackery. All rights reserved.