10 Minutes to Deploying a Deep Learning Model on Google Cloud Platform

How to deploy a Deep Learning model to GCP, entirely for free, forever

Binh Phan
Towards Data Science


Deploy a dandelion and grass image classifier onto the web through Google Cloud Platform! Source: Pixabay

So you’ve trained a Machine Learning model that you’re ecstatic about, and now you want to share it with the world. So you build a web app to serve the model, only to find out that you don’t know how to host it on the Internet 24/7. After all, if no one can see your ML model in production, does it really exist? This tutorial came out of the need to share an easy and free way to deploy a deep learning model to production on Google Cloud Platform using its always-free compute service, the f1-micro. As an example, we’ll deploy a dandelion and grass classifier built with the FastAI deep learning library. I hope you’ll be able to deploy your trained model to the world in under 30 minutes, seamlessly and effortlessly.

Requirements: You will need just the computer you have now, and a Google account!

This tutorial will be broken down into 6 steps:

  1. Sign in to Google Cloud and create an f1-micro instance on Compute Engine
  2. Pull the trained model from Github
  3. Add swap memory
  4. Serve model onto the web with Starlette
  5. Build the web app in a Docker container
  6. Run Docker container

1. Sign in to Google Cloud and Create an f1-micro Instance

Signing up for Google Cloud Platform is free

If you haven’t already, sign up for Google Cloud Platform through your Google account. You’ll have to enter your credit card, but you won’t be charged anything upon signing up. You’ll also get $300 worth of free credits that last for 12 months! You’ll be utilizing GCP’s free tier, so you won’t have to pay to follow this tutorial.

Once you’re in the console, go to Compute Engine and create an instance. You’ll need to:

  1. Name the instance greenr
  2. Set the machine type to f1-micro
  3. Set the OS to Ubuntu 16.04
  4. Increase the boot disk size to 25GB
  5. Allow full access to the Cloud APIs and allow HTTP/HTTPS traffic
How to create a VM instance
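
If you prefer the command line, an equivalent instance can be created from Cloud Shell or any machine with the Cloud SDK installed. The zone below is an assumption (the always-free f1-micro is limited to certain US regions), so substitute your own:

gcloud compute instances create greenr \
    --zone=us-central1-a \
    --machine-type=f1-micro \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=25GB \
    --tags=http-server,https-server \
    --scopes=cloud-platform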

When your instance has been created and is running, SSH into your instance by clicking on the SSH button located on the right side of the screen.
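
You can also connect from a local terminal if you have the Cloud SDK installed (add --zone if you haven’t configured a default):

gcloud compute ssh greenr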

2. Pull the Trained Model from GitHub

First, let’s grab the exported model, which I already trained, from GitHub. If you’re interested in learning how to train this dandelion and grass image classifier using FastAI, follow this notebook hosted on Kaggle! I recommend using the library and following the FastAI course if you’re interested in deep learning.

Clone the greenr-tutorial repo from GitHub, which contains the exported model, export.pkl:

git clone https://github.com/btphan95/greenr-tutorial

3. Add Swap Memory to Our Compute Instance

This is where it gets a bit hacky. Our f1-micro instance only has 0.6GB of RAM, meaning it’s ultra-weak and not capable of installing all of our required deep learning libraries, which are upwards of 750MB in size. We’re going to add swap memory to our little friend, letting it use its existing HDD space as extra RAM, to make this all work. Fortunately, I put all of this into a script, so just run swap.sh from our greenr-tutorial repo to add 4GB of swap memory to our machine:

cd greenr-tutorial
sudo bash swap.sh
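
For the curious, a script like swap.sh boils down to a few commands like these (a sketch of the idea, not necessarily the script’s exact contents):

sudo fallocate -l 4G /swapfile    # create a 4GB file to use as swap
sudo chmod 600 /swapfile          # restrict its permissions
sudo mkswap /swapfile             # format the file as swap space
sudo swapon /swapfile             # enable it
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # persist across reboots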

Note that if you’re using a stronger VM instance with more RAM, you won’t have to follow this step.

4. Serve Model onto the Web with Starlette

Now, we’re going to build a Python script that will serve our model for inference on the web using the Starlette ASGI web framework. Why Starlette and not Flask? Both are web frameworks written in Python, but Starlette, along with Uvicorn, is way faster than Flask and more scalable in production.

Using your favorite text editor, create a Python script called app.py in the greenr-tutorial directory.
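
Here is a sketch of what app.py contains: it loads export.pkl with FastAI, serves a bare-bones upload form, and runs a Uvicorn server on port 8008. The route names, HTML, and exact imports below are illustrative; the version of app.py in the greenr-tutorial repo is the one to copy.

import uvicorn
from io import BytesIO
from fastai.vision import load_learner, open_image
from starlette.applications import Starlette
from starlette.responses import HTMLResponse

app = Starlette()

# load the exported FastAI learner from the current directory
learner = load_learner('.', 'export.pkl')

@app.route('/')
async def homepage(request):
    # a bare-bones upload form
    return HTMLResponse("""
        <form action="/analyze" method="post" enctype="multipart/form-data">
            <input type="file" name="file">
            <input type="submit" value="Classify">
        </form>""")

@app.route('/analyze', methods=['POST'])
async def analyze(request):
    # read the uploaded image and run it through the learner
    form = await request.form()
    img_bytes = await form['file'].read()
    img = open_image(BytesIO(img_bytes))
    pred_class, pred_idx, probs = learner.predict(img)
    return HTMLResponse(f'Prediction: {pred_class}')

if __name__ == '__main__':
    uvicorn.run(app, host='0.0.0.0', port=8008)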

This will create a Starlette server on port 8008 with a web page where a user can upload an image and get a result (is it a dandelion or grass?).

Before moving on, we’re also going to add a file called requirements.txt, which tells Docker which libraries to install when building the container. Copy the following text into requirements.txt in the greenr-tutorial folder:
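
The exact pins live in the repo’s requirements.txt; a plausible set for the fastai v1 API used above looks like this (the version numbers here are illustrative, and python-multipart is what lets Starlette parse file uploads):

fastai==1.0.61
starlette==0.12.9
uvicorn==0.11.5
python-multipart==0.0.5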

5. Containerize App Using Docker

Using Docker lets us build compact, containerized environments that hold only the libraries and data we need, so our app can run in its own environment, anywhere.

First, install Docker:

uninstall old versions:

sudo apt-get remove docker docker-engine docker.io containerd runc

set up the repository:

sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

add the stable repository:

sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

install Docker engine:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

verify installation by running the hello-world image (it should print an informational message verifying that installation was a success):

sudo docker run hello-world

Now, in the greenr-tutorial directory, you’ll need to create a Dockerfile that gives Docker instructions to build the container. Open a new file called Dockerfile and add the following lines:
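
A minimal Dockerfile matching the description below would look something like this (a sketch; the Dockerfile in the greenr-tutorial repo is the authoritative one):

FROM python:3.6-slim
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# add the server and the exported model
COPY app.py export.pkl ./
EXPOSE 8008
CMD ["python", "app.py"]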

This Dockerfile will install the required libraries in a Python 3.6 environment, add the necessary files to the container, and run the Starlette server in app.py.

In the greenr-tutorial directory, build the Docker container with the following command:

sudo docker image build -t app:latest .
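
To confirm the build succeeded, you can list Docker’s local images; you should see app with the latest tag:

sudo docker image ls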

6. Run Docker Container

Now, all we have to do is run our Docker container! The -d flag runs it in the background, and -p 80:8008 maps port 80 on the VM to port 8008, where the Starlette server is listening inside the container:

sudo docker run -d -p 80:8008 app:latest

Now, let’s visit the External IP address of our machine, which you can find on Compute Engine. Make sure to format it like this when you enter it in your browser (for my instance): http://34.68.160.231/
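
The external IP also shows up in the instance list if you’d rather grab it from a terminal:

gcloud compute instances list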

The final deployed model on the web!

If you see the above, then you made it to the end. Congrats! 🎉 Now, you can keep running your machine forever, because it is part of Google’s always-free tier.
