Deploy Node.js microservices to a Docker Swarm cluster [Docker from zero to hero]

Cristian Ramirez
Towards Data Science
19 min read · Feb 20, 2017


This is the 🖐🏽 fifth article in the series “Build a NodeJS cinema microservice”. This series demonstrates how to design, build, and deploy microservices with ExpressJS using ES6 (and some ES7/ES8) features, connected to a MongoDB replica set, and deployed in Docker containers to simulate the power of a cloud environment.

## A quick recap from our previous chapters

The first article introduces the Microservices Architecture pattern and discusses the benefits and drawbacks of using microservices. In the second article we talked about microservices security with the HTTP/2 protocol and saw how to implement it. In the third article we described different aspects of communication within a microservices architecture and explained design patterns in NodeJS such as Dependency Injection, Inversion of Control, and the SOLID principles. In the fourth article we talked about what an API Gateway is, and we saw what a network proxy and an ES6 Proxy object are. We have developed 5 services, we have dockerized them, and we have done many kinds of testing, because we are curious and good developers.

If you haven’t read the previous chapters, you’re missing some great stuff 👍🏽, so I will put the links below so you can give them a look 👀.

We have been developing, coding, and programming 👩🏻‍💻👨🏻‍💻 in the last episodes. We were so excited about creating a NodeJS microservice that we haven’t had time to talk about the architecture of our system, or about something we have been using since chapter 1: Docker.

Well folks, the moment has come to see the power 💪🏽 of containerization 🗄. If you are looking to get your hands 🙌🏽 dirty and learn all the fuss about Docker, then grab your seat and put on your seat belt, because our journey is about to get amusing.

## Current cinema microservice architecture

Until now we have created 3 docker-machines and a MongoDB replica set cluster with one replica in each docker-machine. Then we created our microservices, but we have been creating and deploying them manually, and only on the manager1 docker-machine, so we have been wasting the computing resources of the other two docker-machines. We are no longer going to do that, because we are going to start configuring Docker the correct way, and from scratch :D.

What we are going to use for this article:

  • NodeJS version 7.5.0 (for our microservices)
  • MongoDB 3.4.2 (for our database)
  • Docker for Mac 1.13.0 or equivalent (installed, version 1.13.1 incompatible with dockerode)

Prerequisites to following up the article:

  • Basic knowledge in bash scripting.
  • Have completed the examples from the last chapter.

If you haven’t, I have uploaded a GitHub repository so you can be up to date; see the repo link at the branch step-5.

# What is Docker ?

Docker is an open source project to pack, ship and run any application as a lightweight container.

Docker containers are both hardware-agnostic and platform-agnostic. This means they can run anywhere, from your laptop to the largest cloud compute instance and everything in between — and they don’t require you to use a particular language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases, and backend services without depending on a particular stack or provider. — @Docker

Docker structure

What exactly is Docker, in other words?

In other words, Docker lets us build templates (if we can call them templates; these templates are Docker images) for creating containers that behave like a virtual machine (if we can call them virtual machines, because they’re not), where we can include our application, install all of its dependencies, and run it as an isolated process that shares the kernel with other containers on the host operating system or on any IaaS platform.

Virtual machine structure

What is the difference with virtual machines ?

Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system, which can amount to tens of GB — @Docker

# What is Docker-Machine ?

Docker Machine let us create Docker hosts on our computers, on cloud providers, and inside data centers. It creates servers, installs Docker on them, then configures the Docker client to talk to them. — @Docker

Docker Machine is a tool that let us install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in data centers, or on cloud providers like AWS or Digital Ocean. Docker Engine runs natively on Linux systems. If you have a Linux box as your primary system, and want to run docker commands, all you need to do is download and install Docker Engine.

But why do we need Docker-Machine, even on Linux?

We need it if we want to manage our Docker hosts efficiently on a network, in the cloud, or even locally; and because, when we run tests locally, Docker-Machine can help us simulate a cloud environment.

# What is Docker Swarm ?

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host using an API proxy system.

Let’s first understand what a CLUSTER means.

A cluster is a set of tightly coupled computers that function like a single machine. The individual machines, called nodes, communicate with each other over a very fast network, and they’re physically very close together, perhaps in the same cabinet. Usually they have identical or nearly identical hardware and software. All the nodes can handle the same types of request. Sometimes one node takes requests and dispatches them to the others. — Phil Dougherty

Ok so now let’s see what we can accomplish creating a Docker Swarm Cluster:

  • Multi-host networking
  • Scaling
  • Load Balancing
  • Security by default
  • Cluster management

And many more things. So now that we have gained some vocabulary and knowledge about the Docker ecosystem, it’s time to see how we can create our architecture and deploy our cinema microservices to a Docker Swarm cluster that can handle hundreds, thousands, or millions of user requests, like booking 🎟, shopping 🕹, watching 🎥, or whatever action a user 👥 performs in our cinema system.

When you run Docker without using swarm mode, you execute container commands. When you run Docker in swarm mode, you orchestrate services. You can run swarm services and standalone containers on the same Docker instances. — @Docker

# Building the architecture

STEP 1: Create and init the docker swarm cluster

We are going to start building our cinema microservice architecture from scratch, so let’s begin by talking about how to create our docker-machines and how to init a Docker Swarm cluster.

First we need to create a new folder called _docker_setup in the root of our cinemas project before we start creating and modifying files, so our project looks like the following:

. 
|-- _docker_setup
| `-- ... here we are going to put our bash files
|-- api-gateway
| `-- ... more files
|-- booking-service
| `-- ... more files
|-- cinema-catalog-service
| `-- ... more files
|-- movies-service
| `-- ... more files
|-- notification-service
| `-- ... more files
|-- payment-service
| `-- ... more files

So let’s create a bash file called setup-swarm.sh in our _docker_setup folder; this will create and set up our Docker Swarm cluster. Why a bash file? Because a good developer automates as many things as possible, like this:

OK, it’s a somewhat lengthy script, but I want to show it all because it gives us a better picture of what is going on and how we can automate the Docker Swarm cluster setup.
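Since the embedded gist may not render here, below is a condensed sketch of what setup-swarm.sh can look like. The machine names, worker count, and helper function names are assumptions based on the commands discussed next; the original script may differ:

```bash
#!/usr/bin/env bash
# setup-swarm.sh (sketch): create the machines and init the swarm cluster

MANAGER=manager1
WORKERS=2
DRIVER=virtualbox   # could be parallels, digitalocean, amazonec2, etc.

# retrieve the IP of a given docker-machine
function getIP () {
  docker-machine ip "$1"
}

# ask the manager for the token workers need to join the swarm
function get_worker_token () {
  docker-machine ssh $MANAGER docker swarm join-token worker -q
}

# create the manager machine, then the worker machines
docker-machine create -d $DRIVER $MANAGER
for i in $(seq 1 $WORKERS); do
  docker-machine create -d $DRIVER worker$i
done

# init the swarm on the manager, advertising its own IP
docker-machine ssh $MANAGER docker swarm init --advertise-addr $(getIP $MANAGER)

# join each worker to the swarm
for i in $(seq 1 $WORKERS); do
  docker-machine ssh worker$i docker swarm join \
    --token $(get_worker_token) $(getIP $MANAGER):2377
done
```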

So let’s split up the script and talk about what is happening there.

# So first the basics, how do we create a docker-machine.
$ docker-machine create -d $DRIVER $ADDITIONAL_PARAMS $MACHINE_NAME

This is basically how we create the machines; it doesn’t matter whether they will be a manager node or a worker node, the command is the same for both. The driver could be virtualbox, parallels, digital-ocean, AWS, etc.

Once we create our machines, it’s time to init the swarm cluster, like the following:

$ docker-machine ssh manager1 docker swarm init --advertise-addr $(getIP manager1)

It’s a simple Docker command; easy as that, docker swarm init initializes the swarm manager configuration. We can do this in two ways: setting our environment with eval `docker-machine env manager1` and then executing the docker swarm init command, or ssh-ing into the machine with docker-machine ssh manager1 {and here we pass the docker commands}. To init the swarm we need the docker-machine IP, and the IP can vary by driver provider, so I made a bash function that retrieves the IP of a given docker-machine.

Once the swarm manager has been initialized, we are ready to add the worker nodes. To do that we call the function join_node_manager, which does the following:

$ docker-machine ssh {name of worker} docker swarm join --token $(get_worker_token) $(getIP manager1):2377

Again, we loop this command over the given number of workers in our script. The command first calls the function get_worker_token, which gets the token from the manager node; this is the token needed to register a worker node with the swarm cluster. Next it calls again the function that retrieves the IP of a given docker-machine, completing the Docker Swarm cluster configuration, and we are ready to deploy our cinema microservices.

To take advantage of swarm mode’s fault-tolerance features, Docker recommends you implement an odd number of nodes according to your organization’s high-availability requirements. — @Docker

STEP 2: Create docker images

Containers are the best way to deploy Node.js applications to production. Containers provide a wide variety of benefits, from having the same environment in production and development to streamlining deploys for speed and size. — Tierney Cyren

Currently we have 5 microservices and 1 api-gateway service, which we can run on a dev box as long as it has a compatible version of Node.js installed. What we’d like to do is create a Docker Service (we will see what this is later in the chapter), but for that we need to create a Docker image for every microservice that we have.

Once the Docker image is created, we can deploy our service anywhere that supports Docker.

The way we create a Docker image is by first creating a Dockerfile (the template). A Dockerfile is a recipe that tells the Docker engine how to build our images.

As we are developing only NodeJS apps, we can use the same Dockerfile specification for all the microservices in our project. But are we doing everything to make the process as reliable and robust as possible?

So this is the next file we are going to review, and we are also going to apply good practices to it.

Now we can modify all the microservice Dockerfiles that we created before with these specifications.

Let’s talk a little about what is going on and whether this process is reliable.

By default, the application process inside a Docker container runs as the root user. This can pose a potentially serious security risk when running in production. A simple solution to this problem is to create a new user inside the Docker image and use that user to execute our application. The first process inside a Docker container will be PID 1. The Linux kernel gives PID 1 special treatment, and many applications were not designed to handle the extra responsibilities that come with being PID 1. When running Node.js as PID 1, there will be several manifestations of the process failing to handle those responsibilities, the most painful of which is the process ignoring SIGTERM signals. dumb-init was designed to be a super simple process that handles the responsibilities of running as PID 1 for whatever process it is told to start.
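Putting those two practices together, a Dockerfile along these lines would apply them. This is a sketch, not necessarily the exact file from the repo, and it assumes dumb-init is installable from the Alpine package repository:

```dockerfile
FROM node:7.5.0-alpine

# dumb-init handles the PID 1 responsibilities (e.g. forwarding SIGTERM)
RUN apk add --no-cache dumb-init

# create an unprivileged user so the app does not run as root
RUN addgroup -S app && adduser -S -G app app

WORKDIR /home/app

# install dependencies first so this layer is cached between builds
COPY package.json /home/app/
RUN npm install --production

COPY . /home/app
RUN chown -R app:app /home/app
USER app

EXPOSE 3000
ENTRYPOINT ["dumb-init", "--"]
CMD ["npm", "start"]
```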

STEP 3: Build and Run our docker image

To build our Docker Images, we need to run the following command:

# This command will create a new docker image
$ docker build -t movies-service .

Let’s look at the build command.

  1. docker build tells the engine we want to create a new image.
  2. -t tags this image with the tag movies-service. We can refer to this image by that tag from now on.
  3. . tells Docker to use the current directory to find the Dockerfile.

Now we are ready to run a container from our new Docker image, and to do that we need the following command:

$ docker run --name movies-service -l=apiRoute='/movies' -p 3000:3000 -d movies-service

Let’s look at the run command.

  1. docker run tells the engine we want to start a new container.
  2. --name sets a name for the container. We can refer to this container by that name from now on.
  3. -l sets metadata on the container.
  4. -p sets the port binding as {host}:{container}.
  5. -d runs the container in detached mode, which keeps the container running in the background.

So now let’s create a bash file called create-images.sh in our _docker_setup folder to automate the creation of all our microservice Docker images.

Before we execute this script, we need to modify the start-service.sh that we have in each microservice and rename it to create-image.sh, so it looks like the following:

#!/usr/bin/env bash
docker rm -f {name of the service}
docker rmi {name of the service}
docker image prune
docker volume prune
# the previous commands remove the existing container
# and image, then we clean our environment, and finally
# we create or re-build our image
docker build -t {name of the service} .
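With that per-service script in place, create-images.sh itself can be a simple loop. This is a sketch; the service names are taken from the project layout above, and it assumes we run it from the project root:

```bash
#!/usr/bin/env bash
# create-images.sh (sketch): run the create-image.sh of every microservice
for SERVICE in api-gateway movies-service cinema-catalog-service \
               booking-service payment-service notification-service
do
  # run each build in a subshell so we come back to the project root
  (cd $SERVICE && bash create-image.sh)
done
```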

STEP 4: Publish our docker image to docker hub repository

Publish our Docker image ¿ 🤔 ? Docker maintains a vast repository of images, called the Docker Hub, which we can use as a source of starting points or as free storage for our own images. This is where we pull our node:7.5.0-alpine image from to create our microservice images. But why do we need to publish our images?

Well, because later in the chapter we are going to create Docker Services, and these services will be deployed and replicated all over our Docker Swarm cluster. To start our services, the cluster nodes need the image of the service to start the containers, and if the image is not present locally they will search for it in the Docker Hub and pull it to the host, so that the image is available locally and the services can start.

But first we need to create an account on the Docker Hub website, then we need to log in from our shell with the following command:

$ docker login          
Username: *****
Password: *****
Login Succeeded

Next we need to tag our images, to be able to reference them by name and push them to the docker hub.

docker tagging images structure

Now that we have logged in and we know how to tag our Docker images, it’s time to modify our create-images.sh so it creates our images, tags them, and pushes them to the Docker Hub, all automatically for us. Our bash file needs to look like the following.
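As a sketch of how the modified create-images.sh could look (the Docker Hub username is a placeholder you would replace with your own account):

```bash
#!/usr/bin/env bash
# create-images.sh (sketch): build, tag and push every microservice image
DOCKER_HUB_USER=your-docker-hub-user   # placeholder: use your own account

for SERVICE in api-gateway movies-service cinema-catalog-service \
               booking-service payment-service notification-service
do
  (cd $SERVICE && bash create-image.sh)           # build the image locally
  docker tag $SERVICE $DOCKER_HUB_USER/$SERVICE   # tag it as {user}/{image}
  docker push $DOCKER_HUB_USER/$SERVICE           # upload it to the Docker Hub
done
```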

STEP 5: Setup our mongodb replica set cluster

This is a little off topic from the Docker steps, but we need a database to persist the data that our microservices use. In this step I won’t spend time describing how to do it; I have already written an article on how to deploy a MongoDB replica set cluster to Docker, and I highly suggest you give it a look if you haven’t.

Or skip this step and wait until you read step-7 .
(and make some thug life 😎)

STEP 6: Create docker services in our docker swarm cluster

Doing step 5 is highly important, because if we don’t have our database up and running, our Docker services won’t start correctly.

So now let’s create another file called start-services.sh in our _docker_setup folder. This bash script will start all our microservices as Docker Services, so that these services can scale up or down as needed.

Now that our services are going to scale up and down, calling those services becomes trickier, but there is nothing to be afraid of, my dear readers, because, as I said before, one of the benefits of creating a Docker Swarm cluster is that it sets up a load balancer for us behind the scenes, and Docker is responsible for deciding which service instance handles each request. So let’s review our next file.
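In case the gist does not render, start-services.sh can be sketched as a loop like the following (service names assumed from the project layout; run it from the project root):

```bash
#!/usr/bin/env bash
# start-services.sh (sketch): call the start-service.sh of every microservice
for SERVICE in movies-service cinema-catalog-service booking-service \
               payment-service notification-service
do
  (cd $SERVICE && bash start-service.sh)
done
```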

As you can see, this file is very similar to our create-images.sh, but this time we are calling the start-service.sh from every service. Now let’s see how the start-service.sh file is composed.

#!/usr/bin/env bash
docker service create --replicas 3 --name {service name} \
  -l=apiRoute='{our api route}' -p {port binding} \
  --env-file env {microservice image}

Let’s look at the service command:

  • The docker service create command creates the service.
  • The --name flag names the service e.g. movies-service.
  • The --replicas flag specifies the desired state of 3 running instances.
  • The -l flag specifies metadata we can attach, e.g. apiRoute="/movies".
  • The -p flag specifies the port binding on the host and the container.
  • The --env-file flag loads environment variables for the service from the given env file.

STEP 7: Execute all the automated files in one command

Well well well, it’s time to do the magic 🔮✨, folks.

To execute all our automated files that we have created, let’s create the final script that will do everything for us, because developers are lazy 🤓.

So we need to create this file at the root of the cinemas microservice project, and we are going to call it kraken.sh 🦑, just for fun and because it is very powerful 😎.

It doesn’t have fancy programming inside, but what it does is call all our automated scripts and do the job for us.
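A sketch of what kraken.sh boils down to (the script paths are assumed from the project layout):

```bash
#!/usr/bin/env bash
# kraken.sh (sketch): run the whole setup end to end
bash _docker_setup/setup-swarm.sh     # step 1: create machines and init the swarm
bash _docker_setup/create-images.sh   # steps 2-4: build, tag and push the images
# step 5: start the mongodb replica set cluster (covered in the linked article)
bash _docker_setup/start-services.sh  # step 6: create the docker services
```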

Finally, let’s execute it like the following.

$ bash < kraken.sh

As simple as that: we have modularized everything to simplify the process of creating a Docker Swarm cluster and starting our Docker Services automatically.

“Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.” — Steve Jobs

I have uploaded a video demonstration so you can see what happens when we execute kraken.sh. I sped the video up a little, but the kraken process may take around 10 minutes, more or less, depending on the host’s power.

Finally we can check the status of our Docker environment by executing the commands that appear in the image below.

docker — bash output
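For reference, the status commands shown in that image are along these lines (a sketch; run them against the manager node):

```bash
# list the nodes that form the swarm cluster
docker-machine ssh manager1 docker node ls

# list the services and how many replicas of each are running
docker-machine ssh manager1 docker service ls
```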

As you can see, some services are already replicated in our cluster and others haven’t even started yet; that’s because the machines are still pulling our images to start the services. Once the images are downloaded and our services have started, we should see something like the following image.

docker service ls — bash output

So now our architecture looks like the following diagram, where all our microservices are created dynamically, replicated through the whole cluster, and can be scaled as needed. The only services created manually are our mongo containers and the api-gateways. Why? Because they are tied to the specifics of our host servers: mongo needs persistent storage, and our API needs to discover the host’s Docker services.

Now let’s verify that our services are currently up and running, so we execute commands like in the image below.

docker service ps {service} — bash output
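The command in that image can be reproduced per service, for example (sketch):

```bash
# show on which nodes the replicas of a service are scheduled
docker-machine ssh manager1 docker service ps movies-service
```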

And if we run a Docker Swarm visualizer (several are available on GitHub), we can see how our services (containers) are distributed across our Docker Swarm cluster, like in the following image.

Docker swarm visualizer

And if we scale down a service, e.g. docker-machine ssh manager1 docker service scale movies-service=1, we can see how our services are redistributed across our Docker Swarm cluster, like in the image below.

Docker swarm visualizer

We are almost done with our cinemas microservice configurations, and also with our cinemas microservice system.

STEP 8: Extra configurations

It has been a very long article, but we have learned a lot about how Docker can fit into our development and how Docker can complement our system to make it more robust.

There are a couple of things we still need to do to get everything correctly set up and ready to run our cinema microservices:

  • We need to update the api-gateway so it can now discover the running Docker services and proxy to them.
  • We need to populate our database; for that you can check the commands in the GitHub repository readme here.

In our API Gateway, we just need to change a few lines in our docker.js: instead of calling the listContainers() function, we will call the listServices() function and set up our routes with the provided services.

To not make this article even longer, you are welcome to check the complete code changes to the API Gateway at the GitHub repository, and if you have any questions, as always, you are welcome to send me a tweet or just put a comment below 😃📝.

STEP 9: Testing our setup

Our article wouldn’t be complete without some testing of our system. Tests are fundamentally important because they let us look at the system from different angles, with different perspectives, and with different expectations. The ultimate purpose of software testing is not just to find bugs but to improve the quality of the product. As good testers, we contribute to improving the product quality.

# we need to locate at the api-gateway folder and execute the test
$ npm run int-test

And we will have an output like the following.

Here we are calling our api-gateway service. As you can see, we are calling different docker-machine IPs, and our api-gateway proxies our requests to our Docker Services, and finally our tests pass correctly 😎.

I have also recorded a JMeter stress test against our cinema microservice system deployed in our Docker Swarm cluster, so you can see the expected results.

# Time for a recap

What we have done…

We’ve seen a lot of Docker 🐋. We talked about what this whale is: Docker, Docker-Machine, Docker Swarm, Docker images, the Docker Hub, and Docker Services; how to make their appropriate configurations; how to set everything up with an automated process; and how it all fits into our cinemas system.

With Docker we made our cinemas microservices and system more dynamic, fault tolerant, scalable, and secure. But there are still many interesting things to see in Docker, for example Docker Network, Docker Compose, Docker plugins, etc. Docker is an expanding DevOps world.

We’ve seen a lot of microservice development with NodeJS, and we have seen how to implement these microservices with the Docker ecosystem, but there’s a lot more that we can do and learn. This is just a sneak peek at a slightly more advanced Docker playground. I hope this has shown some of the interesting and useful things that Docker and NodeJS can bring to your workflow.

# One more thing …

I know this article has become very long, but I think it’s worth it to see one extra step that will show us how to monitor our Docker Swarm cluster.

STEP 10: Monitor the docker swarm cluster

So to start monitoring our cluster, we need to execute the following command on our manager1 docker-machine:

$ docker-machine ssh manager1 docker run --name rancher --restart=unless-stopped -p 9000:8080 -d rancher/server

Then, when the container is ready, we need to visit the URL http://192.168.99.100:9000 in our browser.

The Rancher GUI will guide you through the setup, and finally we will be able to monitor our cluster.

Rancher UI of our cluster

And if we click on one of the hosts, we can see something like the following.

Rancher — Manager 1 logs

Finally, I have recorded the super duper stress integration test, so you can see on video the result of all this configuration and how we can monitor our cluster via either the graphic interface or the console logs 😎. The console log rocks 🤘🏽.

So we have seen the super power 💪🏽 that the blue whale 🐋 has to offer to us.

## Final comments

OK, so there is our cinemas microservice system, almost complete. There is something missing: our cinemas microservice system wouldn’t be useful if we don’t develop that missing masterpiece, and that, my friends, is the front-end service through which our final users will interact with our system 😎. That web UI is the missing piece to complete our system.

# Thanks for reading

Thanks for reading! I hope you found value in the article! If you did, punch that Recommend button, recommend it to a friend, share it, or read it again.

In case you have questions on any aspect of this series or need my help in understanding something, feel free to tweet me or leave a comment below.

Let me remind you that this article is part of the “Build a NodeJS cinema microservice” series, so next week I will publish another chapter.

So in the meantime, stay tuned, stay curious ✌🏼👨🏻‍💻👩🏻‍💻

¡ Hasta la próxima ! 😁

You can follow me at twitter @cramirez_92
https://twitter.com/cramirez_92

# Complete code at Github

You can check the complete code of the article at the following link.

# Further reading || Reading complementation
