Deploy Machine Learning Models Right From Your Jupyter Notebook

Deploy machine learning models in one line of code

Avi Chawla
Towards Data Science


Photo by Roman Synkevych 🇺🇦 on Unsplash

Amidst the ongoing AI revolution, countless organizations have taken a keen interest in building intelligent systems at scale.

While plenty of time and energy is being actively spent in training large machine learning models, taking these models to production and maintaining them is a task of its own.

This, in some cases, may even require specialized teams.

And while more and more organizations are turning to artificial intelligence (AI) to serve end customers, smooth deployment of these models remains somewhat cumbersome, yet it is central to ensuring that the intended services are delivered as promised.

But have you ever wondered why deployment is a challenging process? If yes, let me help.

In this blog, I will provide a detailed overview of why ML deployment is typically a tedious process.

Moreover, I will share how you can simplify this process and deploy models from a Jupyter notebook using the Modelbit API.

Let’s begin 🚀!

What is Deployment?

For starters, deployment is the process of integrating a trained machine learning model into a production environment.

Deployment is the last stage in the development lifecycle of a machine learning product. This is when the model has been trained, validated, tested, and is finally ready to be served to an end user.

You can read my previous article on machine learning deployment here:

Pain Points of ML Model Deployment

#1) Consistency challenges

In almost all ML use cases, the algorithm is rarely coded from scratch. Instead, one uses open-source implementations offered by libraries like PyTorch, scikit-learn, and many more.

To ensure reproducibility in production, the production environment should be consistent with the environment the model was trained in.

Dev and Production Environments (Image by Author)

This involves installing the same versions of the libraries used, matching software dependencies, OS configurations, and more.

Achieving this consistency can, at times, be challenging.
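To make the consistency problem concrete, here is a minimal sketch (the function and data are my own, purely illustrative) of the kind of check a team might run to compare the versions pinned in the training environment against what is actually installed in production:

```python
def find_mismatches(pinned, installed):
    """Return {package: (pinned_version, installed_version)} for every
    package whose installed version differs from the pinned one."""
    return {
        name: (want, installed.get(name))
        for name, want in pinned.items()
        if installed.get(name) != want
    }

# Versions pinned in the dev (training) environment vs. what production has
pinned = {"scikit-learn": "1.1.2", "pandas": "1.5.0"}
installed = {"scikit-learn": "1.1.2", "pandas": "1.4.3"}

print(find_mismatches(pinned, installed))  # {'pandas': ('1.5.0', '1.4.3')}
```

A missing package shows up with `None` as its installed version, so the same check catches both version drift and absent dependencies.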

In fact, while writing the Heroku blog mentioned above, I ran into numerous errors and challenges when deploying a machine learning model on Heroku. Overall, the process was tedious and time-consuming to resolve, which I also discussed in that blog.

#2) Infrastructural challenges

ML models typically require specialized processors like GPUs for training.

Depending on the model's complexity, specialized infrastructure may also be needed during inference, i.e., post-deployment.

Setting up these specialized infrastructures is often challenging for data teams.

#3) Inadequate Expertise (or Knowledge Gap)

ML engineers may not have experience with deployment. They may not have the necessary expertise in areas such as software engineering, DevOps, and infrastructure management.

This can make it difficult for them to effectively deploy and scale models in production environments.

In such cases, organizations hire specialized talent.

However, engineers hired specifically for deployment may not have an in-depth understanding of ML algorithms and techniques.

Dev and Production Teams (Image by Author)

This makes it difficult for them to understand the code and make the necessary optimizations, leading to issues with scaling, performance, and reliability that can ultimately undermine the effectiveness of the model in production.

Deploying ML models from Jupyter Notebook

The above pain points, to an extent, highlight the necessity for a data scientist to have the necessary deployment expertise.

Now, data scientists spend most of their time working in a Jupyter notebook.

Thus, to simplify the deployment process and integrate it with Jupyter to create a model endpoint, I will use the Modelbit API.

Workflow

Before building the application, it is worth outlining the workflow, which you can replicate in any of your projects.

The image below depicts a high-level diagrammatic overview of the steps involved in the deployment process.

Deployment workflow (Image by Author)

First, inside a Jupyter notebook, we will train a machine learning model.

Next, we’ll create a prediction function, which will accept the input as its parameters and return the model’s prediction.

After that, we’ll gather the list of packages used, along with their versions and the Python version we trained the model in. This info, along with the function object, will be sent for deployment.

Finally, we will retrieve the model endpoint.

Let’s look at the steps below.

To reiterate, we will do everything from a Jupyter notebook.

Step 1: Training the Machine Learning Model

First, we’ll train a machine learning model we intend to deploy. For simplicity, let’s consider a linear regression model trained on the following dummy dataset:

Dummy dataset (Image by Author)
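The actual dummy dataset lives in the image above; as a stand-in with the same shape (a single noisy linear feature — the coefficients and noise level here are my own, purely illustrative), you could generate something like:

```python
import numpy as np

rng = np.random.default_rng(42)

# One feature, 100 samples, as a 2D array (n_samples, n_features)
x = rng.uniform(0, 10, size=(100, 1))

# Noisy linear target: y ≈ 3.5*x + 2.0
y = 3.5 * x.ravel() + 2.0 + rng.normal(0, 1.5, size=100)
```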

Next, we will train a linear regression model using scikit-learn:

## my_notebook.ipynb

from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(x, y)  ## x: feature matrix, y: targets from the dummy dataset

We get the following regression plot:

Regression Fit (Image by Author)

Step 2: Set Up Modelbit

#2.1) Install Modelbit

First, install the Modelbit package via pip:

## my_notebook.ipynb

!pip install modelbit

#2.2) Login to Modelbit

To deploy models using Modelbit, create your account here. Next, log in to Modelbit from Jupyter:

## my_notebook.ipynb

import modelbit
mb = modelbit.login()

And done!

Now, we can start pushing our models to deployment.

Step 3: Deploy Models

To deploy the model using Modelbit, we first set up a Python function that will handle inference post-deployment.

Essentially, this function contains the code that will be executed at runtime, and it is responsible for returning the prediction.

Its signature should specify the input parameters the model needs. You can name the function anything you want.

Let’s create a my_lr_deployement() method.

## my_notebook.ipynb

def my_lr_deployement(input_x):

    if isinstance(input_x, (int, float)):    ## check input type
        return model.predict([[input_x]])[0] ## prediction
    else:
        return None

Note: Every dependency of the function (model in this case) is pickled and sent to production automatically along with the function. Thus, you are free to reference anything in this method.
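Before deploying, it is worth calling the function locally to confirm it behaves as expected. The sketch below is self-contained for illustration: it restates the function with a stand-in model trained on perfectly linear data (y = 2x), so the expected outputs are known in advance:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in model fit on exactly linear data: y = 2x
model = LinearRegression().fit(np.array([[1.0], [2.0], [3.0]]),
                               np.array([2.0, 4.0, 6.0]))

def my_lr_deployement(input_x):
    if isinstance(input_x, (int, float)):     ## check input type
        return model.predict([[input_x]])[0]  ## prediction
    else:
        return None

print(my_lr_deployement(4))      # ~8.0
print(my_lr_deployement("bad"))  # None: unsupported input type
```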

To deploy, run the following command:

## my_notebook.ipynb

mb.deploy(my_lr_deployement)

That’s it! The model has been successfully deployed. A demonstration is shown below:

Deployment demonstration (Image by Author)

Once your model has been successfully deployed, it will appear in your Modelbit dashboard.

Deployment dashboard (Image by Author)

As shown above, Modelbit provides an API endpoint. We can use it for inference purposes.

## my_notebook.ipynb

!curl -s -XPOST "https://avichawla.app.modelbit.com/v1/my_lr_deployement/latest" \
    -d '{"data":[[1,input_x]]}' | json_pp

In the above request, data is a list of lists.

The first number in the list (1) is the input ID. The ID can be any identifier that you prefer to use. The numbers following the ID are the function parameters.

For instance, for our my_lr_deployement(input_x) method, the data list of lists will be as follows:

# Format: [id, input_x]

[[1,3],
[2,5],
[3,9]]

Let’s invoke the API with the above input:

## my_notebook.ipynb

!curl -s -XPOST "https://avichawla.app.modelbit.com/v1/my_lr_deployement/latest" \
    -d '{"data":[[1,3], [2,5], [3,9]]}' | json_pp

The endpoint returns a JSON response:

{
  "data" : [
    [
      1,     # Input ID
      12.41  # Output
    ],
    [
      2,     # Input ID
      19.33  # Output
    ],
    [
      3,     # Input ID
      33.16  # Output
    ]
  ]
}

Invoking the deployed model is not limited to curl. We can also use the requests library in Python:

## my_notebook.ipynb

import json, requests

requests.post("https://avichawla.app.modelbit.com/v1/my_lr_deployement/latest",
              headers={"Content-Type": "application/json"},
              data=json.dumps({"data": [[1, 3], [2, 5], [3, 9]]})).json()

The output is a Python dictionary:

{'data': [[1, 12.41],    # [Input ID, Output]
          [2, 19.33],    # [Input ID, Output]
          [3, 33.16]]}   # [Input ID, Output]

Custom Environments

Sometimes we may want to pin specific versions of the libraries used when deploying a model.

We can pass these as an argument to the mb.deploy() method call:

## my_notebook.ipynb

mb.deploy(my_lr_deployement,
          python_packages=["scikit-learn==1.1.2", "pandas==1.5.0"])

We can also deploy to a specific version of Python:

## my_notebook.ipynb

mb.deploy(my_lr_deployement,
          python_version="3.9")
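Rather than hard-coding version strings, you could also build the python_packages list from whatever is installed in the notebook's environment. The helper below is my own sketch using the standard library, not part of Modelbit:

```python
from importlib.metadata import PackageNotFoundError, version

def pin_packages(names):
    """Return ['pkg==x.y.z', ...] for each installed package,
    silently skipping any package that is not installed."""
    pins = []
    for name in names:
        try:
            pins.append(f"{name}=={version(name)}")
        except PackageNotFoundError:
            pass
    return pins

# e.g. mb.deploy(my_lr_deployement,
#                python_packages=pin_packages(["scikit-learn", "pandas"]))
```

This keeps the deployed environment pinned to exactly the versions the model was trained with.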

Conclusion

To conclude, in this post, we learned how to deploy machine learning models right from a Jupyter notebook using the Modelbit API.

More specifically, I first demonstrated the training of a simple linear regression model, followed by integrating the Modelbit API into the Jupyter notebook to deploy the model.

Thanks for reading!


👉 Get a Free Data Science PDF (550+ pages) with 320+ tips by subscribing to my daily newsletter today: https://bit.ly/DailyDS.