
Machine Learning in production: Keras, Flask, Docker and Heroku

Pipeline for ML/DL solutions: Build the model, create an API to interact with it, containerize it and deploy it.

In this tutorial we will try to walk together through all the building blocks of a Machine/Deep Learning project in production, i.e. a model that people can actually interact with.

Broadly speaking, we’ll create a web interface where a user can upload an image, then a bit of Deep Learning magic comes into play, and Bingo! 🎉 we get a text revealing what that image represents.

I KNOW, it’s not rocket science, it’s just image recognition, I haven’t reinvented the wheel. In fact, I’m going to be even lazier and, guess what, I’ll be using an already trained model 😜

Keep in mind that the idea behind this tutorial is not to teach you Deep Learning but rather to explore the pipeline of DL in production. What really matters is creating an API to interact with our model, "Dockerizing" it and deploying it.

The code used in this tutorial is available on my GitHub [here].

GO ! GO ! GO !


1. Build Model

As I’ve said before, we’re going to use a pretrained (and effective) Convolutional Neural Network for image classification: VGG-19.

You can download a version of this model trained on more than a million images from the ImageNet database. The pretrained network can classify images into 1000 object categories, such as keyboard, car, orange, and many animals.

If you have a specific task of, say, classifying images in healthcare or in a factory, you can use this model as the starting point of a bigger model. In this case, the weights of the pretrained network can be frozen so that they are not updated during training. This technique is called Transfer Learning. For the sake of simplicity, we are just using VGG-19 as it is.

Let’s start coding .. 🤓

First, we have to import TensorFlow and Keras, as well as some functions to pre-process input images. Note that the default input size for this model is 224×224.

You can download the VGG-19 model from this [link] and then use tf.keras.models.load_model() to load it.

The following are two functions for preparing the input images and predicting the right class based on the pretrained model.
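They could look roughly like this. This is a sketch assuming TensorFlow 2.x; the names prepare_image and predict_class are my own placeholders, so check the repo for the actual code:

```python
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg19 import preprocess_input, decode_predictions

def prepare_image(img_path, target_size=(224, 224)):
    """Load an image from disk and turn it into a (1, 224, 224, 3) batch."""
    img = image.load_img(img_path, target_size=target_size)
    arr = image.img_to_array(img)      # (224, 224, 3) float array
    arr = np.expand_dims(arr, axis=0)  # add the batch dimension
    return preprocess_input(arr)       # VGG-style channel preprocessing

def predict_class(model, img_path):
    """Return the human-readable label of the top predicted ImageNet class."""
    preds = model.predict(prepare_image(img_path))  # (1, 1000) class scores
    # decode_predictions maps class indices to (wnid, label, score) tuples
    return decode_predictions(preds, top=1)[0][0][1]
```

You can then try it out by loading the downloaded model and calling something like predict_class(model, "test_images/cat.jpg") (the path is illustrative).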

Now, you can even test your model by adding these lines to the bottom of your file (⚠️ pay attention to the path)


2. Create Flask API

Now, let’s create an API to interact with this model. To do so we will use Flask: a micro web framework written in Python, it provides functionalities for building web applications, managing HTTP requests, rendering templates and so on.

We are also using Flask-Uploads, which allows your application to flexibly and efficiently handle file uploading and serve the uploaded files.

Some setup first:

⚠️ Do not forget that before importing any library, you first have to `pip install` it!

⚠️ If you run into dependency problems when installing Flask-Uploads, try `pip install Flask-Reuploaded` instead.

Note that we imported the two functions we created in the first step (they are located in another file named model.py)

Now let’s get down to business ..

At this stage, we need a new /upload route, but before going any further we first need a web page where the user can upload an image for us to process.
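A bare-bones templates/upload.html could be as simple as the following (purely illustrative; the field name photo is an assumption that must match what the Flask code expects):

```
<!-- templates/upload.html: a minimal form posting the image to /upload -->
<!DOCTYPE html>
<html>
  <body>
    <h1>Image recognition</h1>
    <form method="POST" action="/upload" enctype="multipart/form-data">
      <input type="file" name="photo">
      <input type="submit" value="Upload">
    </form>
    {% if prediction %}<p>Prediction: {{ prediction }}</p>{% endif %}
  </body>
</html>
```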

Now that you are dazzled by my exceptional gift and expertise in web design 😅 , let’s go back to our upload.py file.

So on the route /upload we expect to receive an HTTP POST request with an attached image; we’ll save this image in the path we specified before and use the two functions we defined earlier to recognize the content of the image. Finally, we render our prediction back to the user.

You can test your application by running the file with `python upload.py`, then go to your browser and type http://localhost:5000/upload .. and enjoy !

⚠️ Make sure your files’ locations correspond to the paths you use. You can check my project structure in my GitHub repository.


3. Containerize : Docker

Your application is now up and running on your machine, and you reached the stage where you want to be able to distribute it.

However, just because the code works well on a specific machine doesn’t mean it will work on other machines. So it would be helpful if we could create an environment that contains our code as well as all the dependencies it requires to run, regardless of the host specs. In Docker’s language, we call this environment .. well .. "a container" 😊

Okay, let’s be clear: this isn’t an advanced Docker tutorial, but it contains basic and crucial things that took me a long time to understand when I first started working with Docker.

In short, Docker allows us to create reproducible environments. So if you’re moving your application to a Cloud resource (and generally you will), you can easily and reliably deploy it without worrying about dependencies, versions or the target system.

Let’s go back to work ..

  1. The first thing to do, obviously, is to download and install Docker; don’t worry, it’s pretty straightforward.
  2. Create a file requirements.txt in your main directory and fill it with the packages that we have installed for this project:
Flask==1.1.2
Flask-Reuploaded==0.3.2
tensorflow==2.3.1
Keras==2.4.3
Keras-Preprocessing==1.1.2
  3. Create a Dockerfile (a file without extension) which contains the instructions for building your Docker image.
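A minimal Dockerfile could look like the following; the base image, entry file and directory layout are assumptions to adapt to your project:

```
FROM python:3.8-slim

WORKDIR /app

# install the dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# copy the application code, templates and the pretrained model
COPY . .

EXPOSE 5000
CMD ["python", "upload.py"]
```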

For further information about the commands of Dockerfile, check the documentation

  4. In a terminal, run the following command to build the Docker image:

`docker build -f Dockerfile -t recog_container:api .`

  5. Run the container in the background and print the container ID using:

`docker run -p 5000:5000 -d recog_container:api`

Once this is running, you should be able to view your app running in your browser at [http://localhost:5000/upload](http://localhost:5000/upload)

In case you need to install more libraries in this container, just run `docker ps` and get the CONTAINER ID of your container.

Next, connect to this container using `docker exec -it <CONTAINER_ID> bash`

Once connected, you can install whatever you want, for instance `pip install pillow`, and exit with the `exit` command.

If you’re interested, you can find lots of other Docker commands [in this link].


4. Deploy : Heroku

Thanks to Heroku, we will be able to deploy our application in the Cloud, so everyone can use it and recognize what’s in their images 😎

  1. Create a new Heroku account if you don’t have one, then download the Heroku Command Line Interface (CLI), which makes it easy to create and manage your Heroku apps directly from the terminal.
  2. Log in to your Heroku account: `heroku login`
  3. Log in to the Container Registry: `heroku container:login`
  4. Create a new Heroku app: `heroku create <app-name>`
  5. Build the image based on your Dockerfile and push it to this particular app on Heroku: `heroku container:push web --app <app-name>`
  6. Release the pushed image to actually deploy it (required on recent versions of the CLI): `heroku container:release web --app <app-name>`
  7. You can finally open up your Heroku application through the command `heroku open --app <app-name>`

Going beyond a Machine Learning model

In this tutorial, we discovered that there are a lot of steps that have to be taken before an ML/DL model can be used by customers.

The first step is to actually build your model 😅 , and we have seen that there are lots of pretrained models that can be used as a starting point for your project. The second step was about building a Flask API in order to be able to interact with your ML-based backend. Then, we used Docker to package up this application with all of the requirements it depends on. Finally, we made the app available to everyone thanks to the Cloud Platform Heroku.

Link to GitHub repository [HERE]

Link to Twitter account [HERE]

