
Data Science for Startups: Model Services

Ben Weber
Towards Data Science
10 min read · Jul 18, 2018



In order for data scientists to be effective at a startup, they need to be able to build services that other teams can use, or that products can use directly. For example, instead of just defining a model for predicting user churn, a data scientist should be able to set up an endpoint that provides a real-time prediction of the likelihood that a player will churn. Essentially, the goal is to provide a model as a service, or a function call that products can use directly.

Given the small size of a startup, it’s useful for a data scientist to be able to stand up and support an endpoint without needing engineering support. One way that teams can provide this functionality is by using services such as AWS Lambda or GCP’s Cloud Functions. These are often referred to as serverless computing, but another way of viewing them is as functions as a service. Tools like AWS Lambda enable data science teams to set up services that can be customer facing, while minimizing the overhead involved in supporting a live service.

I covered some alternative approaches to these services in my model production post, which discussed how to use Jetty to set up a model prediction as a web endpoint. The main issue with this approach is that the data science team now needs to maintain a web server, which may not be part of the team’s expertise. I also discussed the use of PubSub for near real-time predictions, but this approach is not suitable for providing an endpoint that requires millisecond latency for generating predictions.

This type of capability, providing model predictions with millisecond latency, can be categorized as providing models as a service. AWS Lambda provides a great way of implementing these capabilities, but does require some setup to get working with common ML libraries. The goal of this post is to show how to use AWS Lambda to set up an endpoint that can provide model predictions, an approach that works with most scikit-learn models that can be serialized using pickle. I first discuss setting up a function exposed on the open web, and then show how to package up sklearn predictions as functions.

This post builds on the AWS setup discussed in my prior post. It assumes that you have an AWS account set up and have assigned a role with S3 access. The full source code for this tutorial is available on GitHub.

Creating a Test Function

AWS Lambda enables teams to write functions that services or web clients can invoke, without needing to set up any infrastructure. It’s called serverless because teams focus on writing functions rather than building systems. To start, we’ll set up a Hello World function that parses input parameters and returns the parameter as part of the response. For this tutorial, I am focusing on web requests, where the parameters are input as part of a query string, and the response is a web response whose body contains HTML content. We’ll first set up a test function and then use an API Gateway to expose the function to the open web.

The first step is to log into the AWS console and then drill down into the lambda screen. To start, we’ll create a new function using Python 3.6 and the inline editor, as shown below.

Creating our first lambda function

For simple functions without external dependencies, you can use the “edit code inline” functionality to author your lambda function directly in the AWS web interface. For more complicated functions, we’ll need to write code locally or on an EC2 instance, and then upload the packaged function.

After creating and saving a function, you should be able to test it using the “Test” button. This will prompt a dialog where you can configure parameters to send to the function, which we’ll leave blank for now, but modify for the next step. Go ahead and save the default configuration and then click “Test”. The result should look like the dialog below.

A successful run of the Hello World function.

The function prints “Hello from Lambda” to the console, indicating a successful invocation. As a next step, we’ll want to use parameters in the function, so that later on we can feed these as inputs to a model. For this tutorial, we’ll use query string parameters that are appended to a web POST command, but many different configurations are possible with Lambda. It’s common to put services in front of lambda functions, which requires a different approach to handling parameters that is not covered in this post.
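For reference, the default Python function that AWS creates looks roughly like the following (an approximation of the template, which may differ slightly between console versions):

def lambda_handler(event, context):
    # TODO implement
    return 'Hello from Lambda'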

We’ll make a few modifications to the default Hello World function defined by AWS. I’ve added a print statement of the event object, appended the msg parameter to the end of the Hello statement, and modified the return statement to return a web response rather than a string. Here’s the code for our new function:

def lambda_handler(event, context):
    # log the incoming event so it shows up in CloudWatch
    print(event)
    # append the msg query string parameter to the greeting
    result = 'Hello from ' + event['queryStringParameters']['msg']
    # return a web response rather than a bare string
    return { "body": result }

If you try to run this code block, you’ll now get an error. The function tries to retrieve the msg parameter from query string parameters, which will raise an exception since it’s not defined. In order to invoke this function, we’ll need to update our test event to provide this parameter as follows:

{
  "queryStringParameters": { "msg": "Data Science for Startups!" }
}

If you test the function again, you’ll now get a successful response:

{
  "body": "Hello from Data Science for Startups!"
}

I’ve wrapped the result in a web response because we want to open up the function to the open web; if the return statement provides only a string, the lambda function will not be usable as a web call.
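As an aside, the proxy integration used by the API Gateway also accepts a status code and headers in the response. Here’s a sketch of a slightly fuller handler; the extra fields aren’t required for this tutorial, but are useful for real services:

def lambda_handler(event, context):
    result = 'Hello from ' + event['queryStringParameters']['msg']
    # statusCode and headers are optional extras beyond the minimal
    # response used in this tutorial
    return {
        "statusCode": 200,
        "headers": { "Content-Type": "text/html" },
        "body": result
    }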

Setting up an API Gateway
We now have a lambda function that can be used within our virtual private cloud (VPC), but isn’t open to the web. In order to set up the function as a web call, we’ll need to configure an API Gateway, which exposes the function to the open web. With Lambda, you can use the same gateway across multiple functions, and in this post we’ll use the same gateway for the test and predictive model functions.

Setting up an API Gateway

The image above shows the GUI for adding an API Gateway to a lambda function. You’ll need to set a few parameters for the gateway; here’s what I used to configure my setup:

  • “Create a new API”
  • API Name: “staging”
  • Deployment stage: “staging”
  • Security: Open

Once you’ve set these parameters, you’ll need to hit the Save button again. Then you can click on the gateway to configure your setup, as shown below.

Configuring an API Gateway

AWS Lambda provides a few ways of testing your functions before you deploy them into the wild. The image above shows a subset of the components that your function will use when making a call, and clicking on “TEST” provides a way of testing the function via the gateway.

Testing the API Gateway

We can now simulate calling the function on the web. Select “POST” as the method and set the msg parameter to test the function. When you click on test, you should get a result like the dialog shown above.

Now that we’ve tested the API Gateway, we can finally deploy our function to the world. After clicking on “Actions” -> “Deploy API”, you should get a dialog listing the URL of your gateway. If you click on the gateway URL, you’ll get an error, because you need to add the function name and the msg parameter. Here’s the URL of my endpoint, and how to call it from the web:

# API URL
https://vkdefzqrb8.execute-api.us-east-1.amazonaws.com/staging
# Usable endpoint
https://vkdefzqrb8.execute-api.us-east-1.amazonaws.com/staging/lambdaTest?msg=TheWeb
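You can also call the endpoint from Python. Here’s a sketch using the requests library, substituting the URL of your own gateway:

import requests

# URL of the deployed test function (replace with your own gateway URL)
url = "https://vkdefzqrb8.execute-api.us-east-1.amazonaws.com/staging/lambdaTest"

# pass msg as a query string parameter on a POST request
response = requests.post(url, params={"msg": "TheWeb"})
print(response.text)  # Hello from TheWeb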

After all of that setup, we now have a lambda function that we can call from the open web. Go ahead and try it out! The next step is to author a function that provides model predictions in response to passed-in parameters.

Using SKLearn

If you want to use external libraries, such as sklearn, when defining a lambda function, then the process is a bit more complicated than what we just covered. The key difference is that you cannot use the inline code editor; instead, you need to set up a directory with all of the dependencies needed to deploy the function. Here’s the general process for creating Python lambda functions that rely on external libraries:

  1. Create a working directory on your local machine (or EC2 instance)
  2. Use pip install -t to install libraries into this directory
  3. Add all code and assets to your working directory
  4. Zip the contents of this directory into a .zip file
  5. Upload the .zip file to S3
  6. Define a lambda function using a .zip file upload from S3

You can do this on your local machine, or use an EC2 instance to accomplish this task. Since I’m using a Windows laptop, I prefer the EC2 route. I’ve discussed EC2 setup in my prior post on Python. Here are the steps I used to set up my environment:

# set up Python 3.6
sudo yum install -y python36
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python36 get-pip.py
pip3 --version

The next step is to set up external libraries in a local directory. By installing modules this way, you can include all dependencies when defining a lambda function to create a model prediction. Some of the libraries I like to use are pretty heavyweight (50MB+), but lambda’s deployment package limits now accommodate these without any issues. In the past, you had to build scikit-learn with a special setup to meet the payload requirements for lambda.

# install dependencies to a target directory 
mkdir classifier
pip3 install scipy -t classifier
pip3 install pandas -t classifier
pip3 install sklearn -t classifier
cd classifier

We’ve now set up the libraries for our Python model script. The next step is to train a model that we’ll save as a pickle file. I ran the following script to output a model file (logit.pkl) that we’ll use in our lambda function.

import pandas as pd
from sklearn.externals import joblib
from sklearn.linear_model import LogisticRegression

# load the training data set
df = pd.read_csv(
    "https://github.com/bgweber/Twitch/raw/master/Recommendations/games-expand.csv")

# separate the label from the feature columns
y_train = df['label']
x_train = df.drop(['label'], axis=1)

# fit a logistic regression model and serialize it as a pickle file
model = LogisticRegression()
model.fit(x_train, y_train)
joblib.dump(model, 'logit.pkl')
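Before moving on, it’s worth sanity-checking that the pickle file can be loaded back and used for scoring. A quick check, appended to the end of the training script above:

# reload the serialized model and score the first training record
loaded = joblib.load('logit.pkl')
print(loaded.predict_proba(x_train)[0])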

The next step is to define the model prediction function that we want to expose via lambda. I created a new file, logit.py, which includes the prediction function we want to enable as an endpoint:

from sklearn.externals import joblib
import pandas as pd

# load the model once, at import time, so it is reused across invocations
model = joblib.load('logit.pkl')

def lambda_handler(event, context):
    # the query string parameters arrive as a dict of feature name -> value
    p = event['queryStringParameters']
    print("Event params: " + str(p))

    # build a single-row DataFrame from the parameters; scikit-learn
    # coerces the numeric strings to floats when scoring
    x = pd.DataFrame.from_dict(p, orient='index').transpose()
    pred = model.predict_proba(x)[0][1]
    result = 'Prediction ' + str(pred)
    return { "body": result }

It’s also useful to test the code locally before creating a zip file and uploading via S3. I used this snippet to test the prediction function:

# simulate a lambda invocation locally with a sample test event
event = { 'queryStringParameters': {'G1': 1, 'G2': 0, 'G3': 1, 'G4': 1,
    'G5': 0, 'G6': 0, 'G7': 1, 'G8': 0, 'G9': 1, 'G10': 0 }}
print(lambda_handler(event, ""))
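If everything is wired up correctly, this call should print a response dict along the lines of {'body': 'Prediction 0.10...'}, where the exact value depends on the trained model.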

We now have a function and environment setup that we want to upload to lambda. I used the zip command and the AWS CLI to upload the file to S3. To use this command, you’ll first need to run aws configure.

zip -r logitFunction.zip .
aws s3 cp logitFunction.zip s3://bucket/logitFunction.zip

We now have a function that we’ve tested, packaged as a zip file, and uploaded to S3.

Deploying the Function
We’ll follow the same steps as before to set up and deploy the function, with one main change. Instead of using the “Inline Code Editor”, we’ll now use “Upload a file from Amazon S3” and select our zip file. We’ll also need to specify an entry point for the function, which is a combination of the file name and the function name. I used: logit.lambda_handler.
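If you later rebuild the zip file, you can also point the function at the new S3 object without clicking through the console. Here’s a sketch using boto3, where the function and bucket names are placeholders:

import boto3

# update the deployed function to use the new zip file in S3
client = boto3.client('lambda')
client.update_function_code(
    FunctionName='logitFunction',
    S3Bucket='bucket',
    S3Key='logitFunction.zip'
)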

You can test the function using the same steps as before, but we’ll need to include a few more parameters. The model takes 10 parameters as input: G1 through G10. I updated the test event to input the following parameters:

{ "queryStringParameters": {"G1":1, "G2":0, "G3":1, "G4":1,
"G5":0, "G6":0, "G7":1, "G8":0, "G9":1, "G10":0 }
}

Calling the function now returns a model prediction result:

{
  "body": "Prediction 0.10652960571858641"
}

The last step is to reuse the API gateway from before and to deploy the API. Once everything is set up, you should be able to invoke the model over the API as follows:

https://vkdefzqrb8.execute-api.us-east-1.amazonaws.com/staging/logitModel?G1=1&G2=1&G3=1&G4=1&G5=1&G6=1&G7=1&G8=1&G9=1&G10=1
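As with the test function, the endpoint can also be called programmatically. Here’s a sketch using requests, again substituting the URL of your own gateway:

import requests

# build the G1 - G10 features as query string parameters
params = {"G{}".format(i): 1 for i in range(1, 11)}

url = "https://vkdefzqrb8.execute-api.us-east-1.amazonaws.com/staging/logitModel"
print(requests.get(url, params=params).text)  # prints the model prediction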

You’ll also want to enable throttling options to make sure that the endpoint is not abused. We now have an endpoint set up that can be used when building products. This tutorial set up the lambda function as a web endpoint, but many other configuration options are possible.

Conclusion

It’s useful to be able to set up models as an endpoint that different services or products can invoke. A predictive model can be used directly within a product, such as determining if an item should be upsold to a user in a mobile game, or used through other services, such as an experimentation platform that determines which segment to assign a user.

This tutorial has shown how AWS Lambda can be used to deploy a predictive model built with the scikit-learn library. Since this library provides a wide variety of predictive models, the same configuration can be used for a number of different use cases. The key benefit of services like Lambda and Cloud Functions on GCP is that they provide functions as a service, meaning minimal operational overhead is required to maintain the service. They enable data science teams to deploy endpoints that can be used in products.

This post showed how to use a trained model that is packaged as part of the uploaded zip file. One extension that is commonly used is reading model files from S3, so that new models can be deployed without needing to redeploy the function.
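A minimal sketch of that extension, assuming the model file lives in an S3 bucket (the bucket and key names here are placeholders): the handler downloads the pickle to lambda’s writable /tmp directory and loads it from there.

import boto3
from sklearn.externals import joblib

# download the model file from S3 at cold start; /tmp is the
# writable directory available to lambda functions
s3 = boto3.client('s3')
s3.download_file('bucket', 'models/logit.pkl', '/tmp/logit.pkl')
model = joblib.load('/tmp/logit.pkl')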

Ben Weber is a principal data scientist at Zynga. We are hiring!
