Building serverless applications with AWS and Node.js

Introduction

A quick Google search tells us that SERVERLESS literally means "without a server". Serverless architectures allow us to build and run applications and services without having to manage the underlying infrastructure.

The benefits of not having to manage the infrastructure are considerable: cost savings, automatic scalability, being able to focus on the application itself, etc.

There are also some disadvantages: a limit on the maximum execution time of a function, latencies, stateless development, etc.

The Use Case

Imagine that we have an application that allows users to upload images, and we are asked to generate thumbnails (100×100 and 300×300) of every image that is uploaded.

We could implement this functionality as a serverless development on the Amazon AWS platform, using Node.js as the development technology.

It is assumed that the reader has programming and Linux knowledge and is registered with Amazon AWS with the required permissions (role creation, access to Lambda functions and creation of buckets).

First of all, we will create an S3 bucket containing at least the folder "images"; in our case, the bucket is named "gigigo-test-lambda".
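
The bucket and the folder can be created from the AWS console; purely as a reference, the same setup could be scripted from Node.js with the aws-sdk. This is only a sketch: the region is an assumption and the credentials are taken from your local configuration.

// create-bucket.js - one-off setup sketch (assumes the aws-sdk v2 package and configured credentials)
const AWS = require('aws-sdk');

// The region is an assumption: adjust it to your own account
const s3 = new AWS.S3({ region: 'eu-west-1' });

async function setup() {
  // Create the bucket used throughout this example
  await s3.createBucket({
    Bucket: 'gigigo-test-lambda',
    CreateBucketConfiguration: { LocationConstraint: 'eu-west-1' }
  }).promise();

  // S3 has no real folders: an empty object whose key ends in "/" behaves like one
  await s3.putObject({ Bucket: 'gigigo-test-lambda', Key: 'images/' }).promise();

  console.log('Bucket and images/ prefix ready');
}

setup().catch(console.error);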


To generate the thumbnails we will use Lambda functions (they execute code in response to events and automatically manage the required resources). The PUT event of the S3 service, fired when our image is uploaded, will be the trigger (TRIGGER) of the Lambda function, whose goal is to store the crops in the thumbnails folder.

The Lambda function could be composed of code like the following.
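
A minimal sketch of such a handler, assuming the aws-sdk v2 client and the sharp package (the exact folder names under thumbnails/ are assumptions):

// index.js - handler triggered by the S3 PUT event
const AWS = require('aws-sdk');
const sharp = require('sharp');

const s3 = new AWS.S3();

// Crops to generate: destination folder plus width/height of each thumbnail
const crops = [
  { folder: 'thumbnails/100x100', width: 100, height: 100 },
  { folder: 'thumbnails/300x300', width: 300, height: 300 }
];

exports.handler = async (event) => {
  // Each record describes one object uploaded to the bucket
  for (const eventRecord of event.Records) {
    const bucket = eventRecord.s3.bucket.name;
    const key = decodeURIComponent(eventRecord.s3.object.key.replace(/\+/g, ' '));
    const fileName = key.split('/').pop();

    // Download the original image from the images/ folder
    const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();

    // Generate each crop with sharp and upload it to its thumbnails folder
    for (const crop of crops) {
      const resized = await sharp(original.Body).resize(crop.width, crop.height).toBuffer();
      await s3.putObject({
        Bucket: bucket,
        Key: `${crop.folder}/${fileName}`,
        Body: resized,
        ContentType: original.ContentType
      }).promise();
    }
  }
};

Note that sharp ships native binaries, so the zip we deploy must be built with the dependencies installed for the Lambda Linux environment.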


The function receives several parameters; one of them is the "event", which carries the information about the event that invoked it, for example the variables eventRecord.s3.bucket.name or eventRecord.s3.object.key.

The crops variable contains the information about the crops to be generated, with their destination folder and properties such as height and width.

Using the "aws-sdk" library and the "AWS.S3" object, we fetch the image with the "getObject" method (we know which one it is from the event), then generate the crops with the "sharp" library, and finally upload the different crops with the "putObject" method.

To upload our Lambda function we must package it into a zip file. Afterwards, we can create the Lambda function in a guided way through the AWS console, uploading the zip file. You can find it here.


Furthermore, we must create the trigger that associates the S3 PUT event with our Lambda function, indicating the bucket name, the prefix folder and the type of event.
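
This is normally done from the S3 console; as a reference only, the same wiring could be scripted with the aws-sdk, as in the sketch below, where the function name and the ARNs are placeholders.

// configure-trigger.js - wires the S3 PUT event to the Lambda function (sketch)
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const lambda = new AWS.Lambda();

// Placeholder name and ARNs: replace with the values from your own account
const FUNCTION_NAME = 'generate-thumbnails';
const FUNCTION_ARN = 'arn:aws:lambda:eu-west-1:123456789012:function:generate-thumbnails';

async function addTrigger() {
  // Allow the S3 bucket to invoke the function
  await lambda.addPermission({
    FunctionName: FUNCTION_NAME,
    StatementId: 's3-invoke-thumbnails',
    Action: 'lambda:InvokeFunction',
    Principal: 's3.amazonaws.com',
    SourceArn: 'arn:aws:s3:::gigigo-test-lambda'
  }).promise();

  // Fire the function on every PUT under the images/ prefix
  await s3.putBucketNotificationConfiguration({
    Bucket: 'gigigo-test-lambda',
    NotificationConfiguration: {
      LambdaFunctionConfigurations: [{
        LambdaFunctionArn: FUNCTION_ARN,
        Events: ['s3:ObjectCreated:Put'],
        Filter: { Key: { FilterRules: [{ Name: 'prefix', Value: 'images/' }] } }
      }]
    }
  }).promise();
}

addTrigger().catch(console.error);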


If you have followed all the steps and uploaded a file to the images/ folder of the S3 bucket (via PUT), two folders will be created (if they did not exist before) under the thumbnails folder, each containing a file with the corresponding thumbnail.


THERE ARE OTHER WAYS …

Even though the previous example is perfectly correct and our function generates the thumbnails in response to the event, it can be improved. Our needs may change: we might need to change the crop sizes or add new crops, which would force us to modify the Lambda function. We may also want to upload our images to different buckets without modifying the function or creating additional triggers.


We could solve these problems by invoking the Lambda function directly from a Node.js project, with code similar to the following.
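
A sketch of such an invocation, assuming aws-sdk v2 and that the function was deployed under the (assumed) name generate-thumbnails; the region, the file name and the extra 600×400 crop are illustrative:

// invoke-lambda.js - invokes the thumbnail Lambda function directly (sketch)
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda({ region: 'eu-west-1' }); // assumed region

// Payload with the bucket, the previously uploaded file and the crops to generate
const payload = {
  bucket: 'gigigo-test-lambda',
  file: 'images/logo.png',
  crops: [
    { folder: 'thumbnails/100x100', width: 100, height: 100 },
    { folder: 'thumbnails/300x300', width: 300, height: 300 },
    { folder: 'thumbnails/600x400', width: 600, height: 400 } // extra crop, no function change needed
  ]
};

lambda.invoke({
  FunctionName: 'generate-thumbnails', // assumed name of the deployed function
  InvocationType: 'Event',             // asynchronous invocation
  Payload: JSON.stringify(payload)
}, (err, data) => {
  if (err) return console.error(err);
  console.log('Lambda invoked, status:', data.StatusCode);
});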


As you can see, the invoked Lambda function receives an object with a crops property, where the thumbnails to be generated are configured, along with properties for the bucket name and the previously uploaded file.

The code of the Lambda function would then be as follows.
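
Again as a sketch under the same assumptions (aws-sdk v2 and sharp), the handler now reads everything it needs from the invocation payload instead of hard-coding it:

// index.js - handler driven entirely by the invocation payload
const AWS = require('aws-sdk');
const sharp = require('sharp');

const s3 = new AWS.S3();

exports.handler = async (event) => {
  // The caller tells us which object to read and which crops to produce
  const { bucket, file, crops } = event;

  const original = await s3.getObject({ Bucket: bucket, Key: file }).promise();
  const fileName = file.split('/').pop();

  for (const crop of crops) {
    const resized = await sharp(original.Body).resize(crop.width, crop.height).toBuffer();
    await s3.putObject({
      Bucket: bucket,
      Key: `${crop.folder}/${fileName}`,
      Body: resized,
      ContentType: original.ContentType
    }).promise();
  }

  return { generated: crops.length };
};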


In the "event" parameter we receive the data passed in the invocation, turning our Lambda function into a more dynamic and flexible one.

You can find the zip file with the Lambda function, ready to upload, here.

CLAUDIA.JS

You may be thinking that filling in forms in the AWS console or uploading zip files is not a nice way to deploy our code. Versioning these functions can also be tedious.

One of the tools that can solve these problems is claudia.

With claudia we can easily deploy our lambda function in AWS.

Given the following repository, which contains the code of the last Lambda function mentioned above (we should delete the function if we created it in the previous example, since we will create it again), we could deploy our Lambda function as follows:

1-Set the AWS credentials by editing the ~/.aws/credentials file:

[default]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY


2-Install claudia globally with npm ("npm install claudia -g").


3-Clone the repository that contains the code of the Lambda function and install its packages ("npm install").


4-Create the Lambda function in AWS using claudia ("claudia create").

This way our function is deployed and ready to use.


5-In case we need to update our Lambda function, we run "claudia update".


If it does not work for you, verify that the role generated by claudia has permissions to create resources in Amazon S3.


Claudia API Builder + Amazon API Gateway + Lambda

In addition to being a deployment tool, claudia provides an "API Builder" library that allows us to easily route requests to our Lambda functions through Amazon API Gateway, making it possible to create web services backed by our Lambda functions.
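
As an illustration, here is a small sketch of such a service built with claudia-api-builder; the /thumbnails route and the payload shape (the same bucket/file/crops object used in the direct invocation) are assumptions:

// api.js - exposes thumbnail generation as a web service through API Gateway
const ApiBuilder = require('claudia-api-builder');
const AWS = require('aws-sdk');
const sharp = require('sharp');

const api = new ApiBuilder();
const s3 = new AWS.S3();

// POST /thumbnails with a JSON body of the form { bucket, file, crops: [...] }
api.post('/thumbnails', async (request) => {
  const { bucket, file, crops } = request.body;

  const original = await s3.getObject({ Bucket: bucket, Key: file }).promise();
  const fileName = file.split('/').pop();

  for (const crop of crops) {
    const resized = await sharp(original.Body).resize(crop.width, crop.height).toBuffer();
    await s3.putObject({
      Bucket: bucket,
      Key: `${crop.folder}/${fileName}`,
      Body: resized
    }).promise();
  }

  return { generated: crops.length };
});

module.exports = api;

In this kind of project the deployment is done with "claudia create --api-module api" instead of "--handler", so that claudia also configures API Gateway for us.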

1-Clone the following repository and install the packages ("npm install").


2-Deploy to AWS using "claudia create".


3-After the deployment finishes, an object with the result is printed. You just need to copy the value of its url property.


4-Test your image-cropping service using curl (replace the URL with the value copied in the previous step).


Conclusions

SERVERLESS architectures can make it easier to solve business problems without also having to deal with infrastructure problems. Furthermore, they can be a good solution for SaaS models where no infrastructure is available for certain developments.

On the other hand, it seems a little crazy (nowadays) to consider large developments under this model, due to its lack of flexibility (maximum execution time, statelessness, latencies).


This post would not have been possible without the collaboration of: Mónica, Edu, Manuel, Paco, Jorge Lucas, Nonide and Juan.
