SERVERLESS MACHINE LEARNING, GCP

Solution Architecture and Main Components
Before we continue, let’s take a look at the overall architecture and introduce the main components of this solution.

In this architecture, a machine learning model is turned into a web app using the Streamlit framework. This app is then packaged into a Docker container, which is then deployed on the App Engine to share with the world!
Google App Engine
App Engine is a fully managed serverless application platform from Google, which allows you to easily deploy your web app. Deploying an app is as simple as running the command `gcloud app deploy`.
Since it’s serverless, you don’t have to worry about managing servers or any infrastructure. It scales automatically in response to the number of requests the app receives. Therefore, when there is no traffic, it scales down to zero. And you only pay for what you use!
In this case, we are going to use this serverless product offering from Google Cloud Platform (GCP) to serve our machine learning model as an app for others to interact with.
Machine Learning Model
For the purposes of this article, I have generated a machine learning model for the famous iris flowers dataset classification problem (available here). This problem uses four columns of measurements (sepal length, sepal width, petal length, petal width) of the flowers in centimeters to predict the flower species.
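To make the idea concrete, here is a tiny, self-contained sketch of this kind of classifier — a nearest-centroid predictor over a few illustrative iris measurements. This is not the model from the repository (which would typically be trained with scikit-learn and saved to disk); the sample values and the `predict` helper are purely illustrative:

```python
import math

# A handful of illustrative (sepal_len, sepal_wid, petal_len, petal_wid)
# samples per species, in cm. Typical values, not the full iris dataset.
TRAIN = {
    "setosa":     [(5.1, 3.5, 1.4, 0.2), (4.9, 3.0, 1.4, 0.2)],
    "versicolor": [(7.0, 3.2, 4.7, 1.4), (6.4, 3.2, 4.5, 1.5)],
    "virginica":  [(6.3, 3.3, 6.0, 2.5), (5.8, 2.7, 5.1, 1.9)],
}

# Pre-compute one centroid (mean measurement vector) per species.
CENTROIDS = {
    species: tuple(sum(col) / len(col) for col in zip(*rows))
    for species, rows in TRAIN.items()
}

def predict(measurements):
    """Return the species whose centroid is closest in Euclidean distance."""
    return min(CENTROIDS, key=lambda s: math.dist(measurements, CENTROIDS[s]))
```

For example, a flower with short, narrow petals such as `(5.0, 3.4, 1.5, 0.2)` lands closest to the setosa centroid.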

To learn more about this problem in a tutorial fashion, I recommend the following article here.
Streamlit
Streamlit is a minimalistic framework to turn regular Python scripts into beautiful web apps. As a data scientist, you usually have to write a lot of additional code in a web framework, such as Flask, to share a machine learning model as an app. You also have to know a bit about front-end development to make the app beautiful. That’s where Streamlit comes in.

With Streamlit, by adding a few lines of code to a usual data science Python script, you can convert the whole script into a web app in minutes. And it looks beautiful! 😍

In this solution, we will wrap the iris species prediction model file in the Streamlit framework to interact with it as a web app.
Docker Container
Imagine you built an app and shared all the project files to run that app on another machine. If you are lucky it will run as-is. But the majority of the time, you will have to do some debugging to make the app work. Usually, it’s to do with missing libraries or incompatible library versions, or missing files.
Simply put, the environment is just not the same as the machine where the app was tested and worked without issues. To solve this problem, containers were created.

A container is a way to package an app and all its dependencies (code, libraries, files, environment configurations, etc.) into an isolated environment so that the app can run on different computing environments reliably. Docker is one of the tool providers to build containers. And Docker containers run using the Docker Engine.
We will use Docker to containerize our app and run it on Google App Engine.

There are a few key steps that we need to know in order to containerize an app. And it all starts with a Dockerfile.
- Dockerfile: This is a file with a set of instructions to build a Docker container image. Think of this file as a blueprint for a Docker container image. It starts by specifying the base operating system of the container, followed by instructions to install the dependencies and set up the environment for the app. Finally, it ends with the command to run once the container (i.e. the isolated environment) is created and running.
- Docker Image: This is an executable package that gets built from the instructions in the Dockerfile. It includes everything that is required to run the app in an isolated container environment, as specified in the Dockerfile. It can be viewed as a template from which a Docker container is instantiated.
- Docker Container: It’s an instance of the Docker image, running on the Docker engine.
To learn more about Docker, I highly recommend the article here.
Housekeeping
Before we begin the fun part, here is a list of important prerequisites:
- This article assumes you already have a GCP account. If you don’t have one, sign up here; it comes with some free credits.
- To interact with your GCP account from your local machine, install the Google Cloud SDK using the steps outlined here.
- Make sure you have Docker installed on your local machine. If you don’t have it, download it from here.
- _All the code in this article was developed in Python 3.8 on Windows. The codes and necessary files for this article can be found in the GitHub repository here._
Without further ado, let the fun begin! 😀
Step 0: Set up Virtual Environment on Local Machine (Considered Best Practice)
If you don’t do this already, make sure to use virtual environments for your projects. A virtual environment helps to isolate a project in a self-contained environment, encapsulating all its dependencies and avoiding dependency conflicts with other projects on the same system.
It captures all the information required to recreate the project environment (list of dependencies & their versions), which can be shared in a file (e.g. requirements.txt).
The popular virtual environment managers for Python are venv, virtualenv, pipenv, and conda. This article uses conda, which comes packaged with the Anaconda data science toolkit distribution, and I highly recommend it.
On your local machine, create a new virtual environment for your project by running `conda create --name <env_name>` in the terminal.

Once created, you can activate the environment using `conda activate <env_name>`. Notice that the name of the activated environment shows up on the left, before the path of the current working directory.

To install Python packages for a project in this isolated environment, you can use the commonly used pip package manager (i.e. the command `pip install`). But first, make sure pip is installed in the activated virtual environment by running `conda install pip`.
You can also download a cheat sheet of all the conda commands here. 😃
Important Side Note
If you installed conda using the default recommended settings of the Anaconda installer, your terminal will not recognize the `conda` command. This is because, with the recommended installation settings, Anaconda3 is not added to your PATH environment variable.
Instead, search for the Anaconda Prompt in your Start menu and use that to ensure `conda` is recognized as a command.

_You can also activate the Anaconda environment in your normal terminal by running the activation script `/Scripts/activate`._

Step 1: Download & Run Example Streamlit App
Create a folder for the project and download the code files for this article from the repository here.
Then navigate to this directory in the terminal (`cd <path_to_dir>`) and make sure that the virtual environment is active (`conda activate <env_name>`).

Obviously, you can do the same using your favorite IDE. But make sure to activate a virtual environment (click here for VS Code). Otherwise, you will end up installing dependencies in your default environment, which could break other projects using that environment.
Now let’s take a look at the Streamlit app file (app.py).
Notice how, by adding a simple import (`import streamlit as st`), a regular data science script (with pandas, numpy, `model.predict()`, etc.) gets converted into a Streamlit web app. All we have done is add Streamlit widgets to interact with the model, such as a text input widget, a button widget, etc.
You can try running the example Streamlit app in the newly created virtual environment using `streamlit run app.py`.
This should result in errors due to missing Python modules in the virtual environment. You can use `pip install` to install the missing modules one by one as you encounter them, until the app finally runs.

Or, you can install all the dependencies of the app at once using the dependency list in requirements.txt, which replicates the environment in which the app was created and tested.
Install all the project dependencies into the virtual environment using the command `pip install -r requirements.txt`.

You can create a dependency list like this for your own project, once completed, by running `pip freeze > requirements.txt`.
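For reference, a requirements.txt for an app like this would look roughly as follows — the package names are the app’s real dependencies mentioned above, but the exact version pins here are illustrative, not copied from the repository:

```
streamlit==0.71.0
pandas==1.1.4
numpy==1.19.4
scikit-learn==0.23.2
```

Pinning exact versions is what makes the environment reproducible on another machine.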
Give the installation of the modules some time to complete. When it’s done, you can run the Streamlit app in the virtual environment using the `streamlit run app.py` command.


You should see your default browser pop up and display the app (on localhost, usually port 8501 by default). Feel free to play around with the numbers and see the model work.
Congratulations, you have built a web app in minutes to interact with a Machine Learning model! 😄
When you are done playing with the Streamlit app, you can shut the app server down using Ctrl + C in the terminal.
Step 2: Dockerize App and Test Locally
We will now begin to dockerize this app, to run as a container on any machine with a Docker engine. To do this we need to first build a Docker image from the instructions in the Dockerfile in the current directory. These instructions dictate how to build the container.
Dockerfile Overview
The instructions in the Dockerfile are pretty straightforward.
- `FROM python:3.8`: We start off by setting the base image to a Linux OS with Python 3.8 installed.
- `WORKDIR /app`: Creates a working directory in the container.
- `COPY . /app`: Copies all the files in the current project directory into the container working directory.
- `RUN pip install -r requirements.txt`: Runs the usual command to install all the dependencies for the app in the container environment.
- `EXPOSE 8080`: Instructs the container environment to expose port 8080 of the container for the world to listen on.
- `CMD streamlit run app.py --server.port=8080`: Once the container is set up, this command is executed in the container environment. Notice that this is the usual command to start a Streamlit app; we have only added an additional flag to instruct the Streamlit server to run on port 8080.
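Putting those instructions together, the Dockerfile looks like this (reconstructed from the steps described above; the file in the repository may differ slightly):

```dockerfile
# Base image: Linux with Python 3.8 pre-installed
FROM python:3.8

# Create and switch to the app's working directory inside the container
WORKDIR /app

# Copy the project files into the container
COPY . /app

# Install the app's dependencies inside the container
RUN pip install -r requirements.txt

# Expose port 8080 of the container
EXPOSE 8080

# Start the Streamlit server on port 8080 when the container runs
CMD streamlit run app.py --server.port=8080
```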
Build Docker Image
With the virtual environment active, choose a suitable name for the Docker image and run `docker build -t <docker_image_name> .` in the terminal to build a Docker image from the Dockerfile.
The dot (`.`) at the end of the command instructs Docker to look for a Dockerfile in the current directory. The build process can take some time. Notice how it executes each instruction from the Dockerfile.

Running a Container
Once the image build is complete, you can run the app as a container from the image using `docker run -p 8080:8080 <docker_image_name>`.
This command tells the Docker Engine to run the Docker image and map port 8080 of the local machine to the container port 8080. This will allow us to navigate to http://localhost:8080/ on our browser, to view the output of the container (i.e. the Streamlit app).

Once you see the app running from the container, you can be certain that the app will run with no issues on any other machine with a Docker engine. We are finally ready to deploy the app to the App Engine! 👌
Important Side Note
The URLs printed in the terminal once the Docker container is running will not work if you click on them. These are output messages from the Streamlit server inside the container itself.
We only have access to what the container exposes to us, which is mapped to http://localhost:8080/ of our local machine.
Step 3: Deploy App to Google App Engine
Before deploying the app to App Engine, we need to add another file to the project directory: app.yaml. This is a configuration file that contains settings for App Engine.
We have kept it very basic for the simplicity of this article. You can learn more about configuring the app engine using this file from the guide here.
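For a Dockerfile-based deployment like this one, App Engine uses the custom runtime on the flexible environment, so a minimal app.yaml needs little more than the following (a sketch — the repository’s file may set further options such as resources or scaling):

```yaml
# Build the app from the Dockerfile in this directory
runtime: custom
# Custom runtimes require the App Engine flexible environment
env: flex
```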
You can now deploy the app, from the virtual environment in the project directory, using the command `gcloud app deploy`.

It should ask for some basic settings for deploying the app. Follow the prompts and the deployment should begin. This can take some time, and for good reason.
App Engine takes all the files in the current directory and pushes them to the cloud. It then builds a Docker image of the app in the cloud, as per the instructions in the Dockerfile, and stores it in the Google Container Registry. It then uses this image to deploy a container (a running instance of this image) on the App Engine.
Once deployed, you can see the app running on the internet using the target url.

Important Side Note
If you are doing this for learning purposes, make sure to stop running the app in the GCP Console App Engine dashboard.
Also, be careful about sharing the URL for your app. Otherwise, you can end up incurring unwanted costs from others visiting your app! Well, now you know the reason for the blurred-out parts of the screenshots. 😅

Conclusion
Congratulations!! You now know how to deploy your model as a serverless app on GCP. 😄
To learn how to deploy a machine learning model as a serverless endpoint, check out my article here. 😄