
3 Steps to Build and Deploy your NLP model as a Microservice on Azure

The easiest and cheapest way to deploy ML models on Azure

Photo by bruce mars on Unsplash

After spending countless hours training your model, you now need to make it available for other applications or services.

Depending on how you approach the deployment to the cloud, this process may take several hours or just a few minutes. More importantly, your deployment choice should be based on your scalability and budget requirements.

Here I will show a quick way to deploy an NLP model as a microservice on Azure, using Python code directly instead of building containers.

This tutorial works for any ML model, not just NLP.

The setup I will show is probably one of the simplest available, and it costs very little to maintain.

Before you start, make sure to prepare as follows:

  1. Create an Azure account with a valid subscription: If you don’t already have an account with a valid subscription, you can create a new Azure account and get a free trial period to start.
  2. Install Azure CLI: We will use commands to create the resources on Azure (instead of the Azure Portal UI). This approach is the most maintainable, since we keep a script for each resource we create, making the setup easier to evolve and reproduce. See Microsoft’s documentation for how to install the Azure CLI.
  3. Install Azure Functions Core Tools: Before deploying our microservice to Azure, we will create and test everything locally without spending a dime. Azure Functions Core Tools provides the local development experience for designing, developing, testing, running, and debugging Azure Functions. See Microsoft’s documentation for how to install it.

Below we will go through the following three steps:

1. Create and test an Azure function locally
2. Create the resources on Azure
3. Deploy the function to Azure

1. Create and test an Azure function locally

Ideally, we want to test everything locally before deploying to Azure. Local testing lets us make sure everything works without spending unnecessary money debugging in the cloud. That said, using monitoring tools like "Application Insights" on Azure is still worthwhile and necessary to make sure your apps run smoothly, but that’s outside the scope of this post.

Below, we first use the terminal to create and activate a Python virtual environment. Then we create a FunctionApp project locally, which organizes multiple functions together. Finally, we create the function getSentiment, which will be triggered by an HTTP request.

# Create and activate an environment
python3 -m venv .venv
source .venv/bin/activate
# Create a FunctionApp Project Locally
func init --worker-runtime python
# Create a Function
func new --name getSentiment --template "HTTP trigger" --authlevel anonymous
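
After running these commands, the project layout should look roughly like this (the generated files can vary slightly between Core Tools versions):

.
├── getSentiment/
│   ├── __init__.py       # the function’s code
│   └── function.json     # trigger and binding configuration
├── host.json             # global configuration for the FunctionApp
├── local.settings.json   # local-only settings (not deployed)
└── requirements.txt      # Python dependencies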

Now we can edit the function in the file getSentiment/__init__.py, adding the code for our model.
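
Below is a minimal sketch, assuming Hugging Face’s transformers pipeline API and the layout generated by the HTTP-trigger template (adapt the model name and request handling to your needs):

import json

import azure.functions as func
from transformers import pipeline

# Load the model once at import time so it is reused across warm invocations
sentiment_model = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Read the "text" query-string parameter
    text = req.params.get("text")
    if not text:
        return func.HttpResponse(
            "Please pass a 'text' parameter in the query string.",
            status_code=400,
        )
    # Run sentiment analysis and return the input alongside the prediction
    result = sentiment_model(text)[0]
    return func.HttpResponse(
        json.dumps({"text": text, "sentiment": result}),
        mimetype="application/json",
    )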

The function created above will receive a text parameter and return the input text along with the sentiment analysis obtained from Hugging Face’s "DistilBERT base uncased finetuned SST-2" model.
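
For reference, the "HTTP trigger" template also generated a getSentiment/function.json that binds the function to HTTP requests; with --authlevel anonymous it should look roughly like this:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}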

Since we’ve added a few libraries for the code above, make sure to update your requirements.txt file to match.
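
At a minimum, the sketch above needs the following packages (pin exact versions as appropriate; torch is the backend the transformers pipeline runs on):

azure-functions
transformers
torch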

And after that, install the libraries in the environment we created above:

pip install -r requirements.txt

Now we are ready to test the function locally. To do this, you need to run:

func start

You should get something like this as the output in the terminal:

Example of output from "func start" command. Image by the author.

So we can go to the URL listed above, passing the parameter text to test the model. For example:

http://localhost:7071/api/getSentiment?text=I%20really%20like%20bananas

The output should be:

Example of output from a local function. Image by the author.
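
With the sketch above, the response body would look something like this (the exact score will vary):

{"text": "I really like bananas", "sentiment": {"label": "POSITIVE", "score": 0.9998}}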

Now that everything is running locally as expected, we can create the resources needed on Azure and deploy our microservice.


2. Create the resources on Azure

You could do the following steps in the Azure Portal by clicking on each resource and choosing its settings, but that is hard to maintain, so scripts are generally recommended.

Below, we run a few commands in the terminal to create the following resources, the minimum needed to deploy a function on Azure:

  • Resource Group: A resource group is simply a container that holds related resources for an Azure solution.
  • Storage Account: An Azure storage account centralizes data objects such as blobs, file shares, queues, tables, and disks, and provides a unique namespace for them. We will use the standard type (the cheapest), recommended mostly for files, blobs, and tables.
  • FunctionApp: A function app groups functions as a logical unit for easier management, deployment, scaling, and sharing of resources. We will use the most basic consumption plan to host the function app, pointing it at the storage account we create.

# Login to your Azure Account from the Command Line
az login
# Create a Resource Group
az group create --name rgSENT --location westus
# Create a Storage Account
az storage account create --name stracc2sent --location westus --resource-group rgSENT --sku Standard_LRS
# Create a FunctionApp
az functionapp create --name nlpfuncsa --resource-group rgSENT --os-type linux --consumption-plan-location westus --storage-account stracc2sent --functions-version 3 --runtime python --runtime-version 3.9

Please note that I used nlpfuncsa as the name of the FunctionApp. This name must be globally unique on Azure, so use a different one for your app. If the command above returns Operation returned an invalid status 'Conflict', the name is likely already taken, so pick a different (and unique) name for your FunctionApp.


3. Deploy the function to Azure

Finally, we can deploy our local project’s code to the FunctionApp created on Azure, using the following command:

func azure functionapp publish nlpfuncsa

This process takes a while because the project is built remotely. In the end, you should get the following result:

Example of output after deploying function to Azure. Image by the author.

Now you can go to the URL listed above, passing the parameter text to test your model. For example:

https://nlpfuncsa.azurewebsites.net/api/getsentiment?text=I%20really%20like%20bananas

The output should be the same that we saw locally:

Example of output from function already deployed on Azure. Image by the author.

That’s it. Now you have your NLP model deployed on Azure. Here’s the GitHub repository with all the code submitted to Azure.

If you want to delete everything you created, go to the Azure Portal, find "Resource groups", click on the resource group you created (if you followed this post exactly, it should be "rgSENT"), then click on "Delete resource group". Since all of the resources we created live in the same resource group, this deletes everything.
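
Alternatively, since we created everything from the command line, a single command deletes the resource group and all the resources inside it:

# Delete the resource group and everything in it (add --yes to skip the confirmation prompt)
az group delete --name rgSENT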


If you enjoy reading stories like these and want to support me as a writer, consider signing up to become a Medium member. It’s $5 a month, giving you unlimited access to stories on Medium. If you sign up using my link, I’ll earn a small commission.

