Serverless: A Painless AWS Boilerplate

Presenting a boilerplate for AWS Lambda deployment with offline testing enabled

Anuradha Wickramarachchi
Towards Data Science

--

AWS Lambda

Serverless development has gained attention for its ease of deployment. Further, AWS Lambda has become a popular choice given its ability to integrate with other services through the AWS SDK. However, starting small without a plan for scale might leave you with a huge refactoring job as your API grows. So here I will aggregate my experience from a startup and share a boilerplate that will make your development fast and scalable, taking a simple REST API as a starting point. I assume you know a bit about how AWS deployments and CloudFormation work.

Points to Remember

  1. CloudFormation templates allow only 200 resources per deployment (docs). Therefore, services should be deployed in a fine-grained manner.
  2. Each deployed service will have multiple resources and independent access to other services.
  3. If we have separate services, we can bundle each one separately, keeping bundle sizes small.

In this article, let’s focus on the first limitation. The only workaround for it is having segregated services, each focused on a single task (which is indeed the motive behind microservices). Let us work out the boilerplate using the Serverless Framework (getting started guide: here). You will also need the serverless-offline plugin (get it: here).

Structuring the serverless configuration

In summary, we will have a separate folder for each service and a service.yml inside each folder. In the Serverless Framework, we can pass any option using --option VALUE when we deploy or run the offline plugin. We will exploit this facility to pick individual services for deployment or offline running. We will load each service using the --service option and the stage using the --stage option on the command line.

In this boilerplate, I will organize my folder structure as below.

- serverless.yml (main serverless config file)
- prod.env.yml (configs for the prod environment)
- dev.env.yml (configs for the dev environment)
- offline-serverless.js (offline runner for testing)
- package.json (node modules needed, for nodejs)
// Folders containing the application logic
- first-service
-- service.yml
-- main.js
- second-service
-- service.yml
-- main.js
// Utils for routing
- utils
-- lambda-router.js
-- errors.js (Error messages, not essential)
-- db-util.js (Managing the database connection)

Content of serverless.yml

This is the heart of our service structuring. In YAML, we can structure the configuration so that parts of the file are populated at run time. In our case, we pick the service-related content from each service’s service.yml file.

Our serverless.yml would look as below.

Serverless.yml

Here, package will carry the folders to include and functions will carry the functions inside each service. Both are loaded from the service.yml file in each service’s folder.
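The original gist is not reproduced here, but the idea can be sketched roughly as follows (the service naming and env file keys are illustrative assumptions); the ${file(...):key} syntax pulls fragments in at deploy time, keyed by the --service and --stage options:

```yaml
# serverless.yml — a rough sketch, not the exact boilerplate file.
# Service names and env file layout are illustrative.
service: app-${opt:service}

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  # Load environment variables from dev.env.yml or prod.env.yml
  environment: ${file(./${opt:stage, 'dev'}.env.yml)}

# Folders to bundle, read from the chosen service's own service.yml
package: ${file(./${opt:service}/service.yml):package}

# Function definitions, also read from the service's service.yml
functions: ${file(./${opt:service}/service.yml):functions}
```

With this in place, --service service1 resolves every ${opt:service} reference, so a single top-level file can drive any number of independent deployments.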

An example deployment script command:

serverless deploy --stage dev --service service1

Offline Configuration For Testing

An example offline running command would look like this:

serverless offline start --stage dev --service service1 --port 3000

However, for testing we will have to run each service on a different port. Following is a script to make that task easy.

offline-serverless.js

Here PATH1 and PATH2 are base paths (for example, users/ for the Users service and posts/ for the Posts service). The base path is not relevant inside a service; note that I have removed it in line 24. Each service is specialized in one thing, so having a base path inside the service is redundant (yet we shall have one in the final API deployment).

We can simply run all our services offline for testing using:

node offline-serverless.js

Content Inside Each Service

Each service shall contain the desired resources. In this example, we will place the REST API endpoints. The configuration will look like this.

service.yml

Note that here we include node_modules, utils, and the files carrying the logic.
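The gist itself is not reproduced here, but a per-service service.yml along these lines would satisfy the ${file(...):package} and ${file(...):functions} references in the top-level file (handler and path names are illustrative assumptions):

```yaml
# first-service/service.yml — a rough sketch; handler and paths are illustrative.
package:
  include:
    - node_modules/**
    - utils/**
    - first-service/**

functions:
  users:
    handler: first-service/main.handler
    events:
      - http:
          path: /{proxy+}
          method: any
```

Note there is no users/ prefix on the path: routing by base path is left to API Gateway, as discussed in the deployment section below.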

Deployment

Deployment takes place just like a normal Serverless Framework deployment. However, there are a few things worth noting in API Gateway.

View of an Example Deployment with Multiple Services

As discussed before, although our application has endpoints such as users/profile-detail, our Users service will accept only profile-detail, since its sole purpose is handling users. However, we need the API to know that users/ requests must be fed to the Users service lambda. This is how we do that.

Go to API Gateway, then Custom Domain Names. You’ll see the following view.

Custom Domain Names

Here you can click Edit and add custom mappings. For example, in the above setup, I have added them as follows.

Mapping for base path, service and stage

For this API, there is only one mapping, for the production environment. Should you have several environments that you test online, you will have to set up a mapping for each. Here, I have purchased a domain and linked a subdomain to my API. You can also use a domainless API, but then you will have to rely on AWS’s randomly generated URLs for each service (not neat!).
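If you prefer to keep this mapping in code rather than clicking through the console, the serverless-domain-manager plugin can declare the same base path mapping per service. This plugin is not part of this boilerplate, so treat the following as an optional sketch (the domain name is a placeholder):

```yaml
# In a service's serverless config — requires the serverless-domain-manager
# plugin, which is NOT part of this boilerplate.
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.example.com   # your purchased domain (placeholder)
    basePath: users               # maps api.example.com/users/* to this service
    stage: prod
    createRoute53Record: true
```

Either way, the effect is the same: API Gateway strips the base path before invoking the lambda, so the service itself never sees the users/ prefix.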

Winding-up

  • Paths are resolved relative to serverless.yml, so be careful when you load external files. Always use __dirname + ‘/file.extension’ when loading files.
  • I have made a repo for this boilerplate including my lambda router and DB handler. Have a look, star, fork, improve and send me a PR.
    https://github.com/anuradhawick/aws-lambda-serverless-boilerplate
  • Have a look at the lambda router, which is pretty simple and supports LAMBDA_PROXY integrations (see it: here). You can see how I’ve used it here.
  • If you are planning to add more services to offline-serverless.js, pick different ports along with their base paths, and update the services array in line 4.

I believe this article will help developers who work hard day in and out.
Thanks for reading! Cheers!
