
Before we begin, I would strongly suggest you read part-1 if you have not yet done so. In this guide, we will go through how to deploy your model so that front-end developers can use it in their applications through a REST API, without having to worry about the underlying details.

As we have already developed and deployed our model endpoint in the previous part, we will start by developing our lambda function.
AWS Lambda Functions
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes.
By using an AWS Lambda function, we can avoid setting up dedicated servers to monitor incoming requests and execute the code. This has many benefits, such as paying for compute only when the lambda function is triggered by an incoming request, instead of paying for a dedicated server around the clock.
Follow the steps below to create a lambda function that will process the incoming requests.
- Log in to your AWS console and select Lambda from the list of services.

- Create a new lambda function to begin coding.

- Add the name of your model endpoint as an environment variable: click on the Configuration tab and add a new variable with the key ‘ENDPOINT_NAME’ and the name of the endpoint you deployed as its value.

- Put the code below into the code editor. Make sure you replace the value of the "bucket" variable with your own bucket name so that it points to the location where you saved the transformations during model development.
Note: You can find the S3 location in the notebook you created at the beginning of part-1.
import os
import json
import boto3
import pickle
import sklearn  # imported so the pickled scikit-learn transformers can be loaded
import warnings

warnings.simplefilter("ignore")

# grab environment variables
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

# S3 location of the transformations saved during model development
bucket = "sagemaker-ap-south-1-573002217864"
key = "cust-churn-model/transformation/transformation.sav"
s3 = boto3.resource('s3')


def lambda_handler(event, context):
    # apply the saved transformations, then score the record on the endpoint
    payload = process_data(event)
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='text/csv',
                                       Body=payload)
    result = json.loads(response['Body'].read().decode())
    # flag the customer as churn if the score crosses the threshold
    predicted_label = 'True' if result > 0.39 else 'False'
    return predicted_label


def process_data(event):
    # load the saved transformations (encoder, scaler, column lists) from S3
    trans = pickle.loads(s3.Object(bucket, key).get()['Body'].read())
    event.pop('Phone')
    event['Area Code'] = int(event['Area Code'])
    # split the record into categorical and numerical columns
    obj_data = [[value for key, value in event.items() if key in trans['obj_cols']]]
    num_data = [[value for key, value in event.items() if key in trans['num_cols']]]
    # apply one-hot encoding and scaling, then serialize as CSV for the endpoint
    obj_data = trans['One_Hot'].transform(obj_data).toarray()
    num_data = trans['scaler'].transform(num_data)
    obj_data = [str(i) for i in obj_data[0]]
    num_data = [str(i) for i in num_data[0]]
    data = obj_data + num_data
    return ",".join(data)
- One last step before we can execute our lambda function: Lambda uses vanilla Python 3 for execution and does not have libraries like Pandas, NumPy, or scikit-learn installed by default, so we need to add a scikit-learn layer so that our transformations can be loaded.
I will not go into the details of how to create and add a layer, as it is a separate topic; you can find more details about that here.
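If you prefer to script this part, the layer can also be published and attached with boto3. Below is a minimal sketch, assuming you have already built a sklearn-layer.zip containing scikit-learn under a python/ folder, and that your function is named cust-churn-lambda (both names are placeholders):

import boto3

lam = boto3.client('lambda')

# publish the layer from a zip built beforehand
# (e.g. pip install scikit-learn -t python/ and zip up the python/ folder)
with open('sklearn-layer.zip', 'rb') as f:
    layer = lam.publish_layer_version(
        LayerName='sklearn-layer',        # hypothetical layer name
        Content={'ZipFile': f.read()},
        CompatibleRuntimes=['python3.8'],
    )

# attach the new layer version to the function (function name is an assumption)
lam.update_function_configuration(
    FunctionName='cust-churn-lambda',
    Layers=[layer['LayerVersionArn']],
)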
Once done, we are ready to test our lambda function. Select the Test dropdown to configure a test case, and paste the input data below into it to run the test.
{
  "State": "SC",
  "Account Length": "15",
  "Area Code": "836",
  "Phone": "158-8416",
  "Int'l Plan": "yes",
  "VMail Plan": "no",
  "VMail Message": "0",
  "Day Mins": "10.018992664834252",
  "Day Calls": "4",
  "Day Charge": "4.226288822198435",
  "Eve Mins": "2.3250045529370977",
  "Eve Calls": "0",
  "Eve Charge": "9.97259241534841",
  "Night Mins": "7.141039871521733",
  "Night Calls": "200",
  "Night Charge": "6.436187619334115",
  "Intl Mins": "3.2217476231887012",
  "Intl Calls": "6",
  "Intl Charge": "2.559749162329034",
  "CustServ Calls": "8"
}
- Once you run the test, the lambda function will pass the input data through the transformations and send it to the endpoint we deployed, returning a prediction for the record.
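Besides the console test, you can also invoke the function programmatically with boto3. Here is a small sketch, assuming the function is named cust-churn-lambda (a placeholder) and using the same test record as above:

import json
import boto3

client = boto3.client('lambda')

# the same record used for the console test above
event = {
    "State": "SC", "Account Length": "15", "Area Code": "836",
    "Phone": "158-8416", "Int'l Plan": "yes", "VMail Plan": "no",
    "VMail Message": "0", "Day Mins": "10.018992664834252", "Day Calls": "4",
    "Day Charge": "4.226288822198435", "Eve Mins": "2.3250045529370977",
    "Eve Calls": "0", "Eve Charge": "9.97259241534841",
    "Night Mins": "7.141039871521733", "Night Calls": "200",
    "Night Charge": "6.436187619334115", "Intl Mins": "3.2217476231887012",
    "Intl Calls": "6", "Intl Charge": "2.559749162329034", "CustServ Calls": "8",
}

response = client.invoke(
    FunctionName='cust-churn-lambda',  # assumption: your function name
    Payload=json.dumps(event),
)
print(response['Payload'].read().decode())  # "True" or "False"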

Building REST API
An API is a set of definitions and protocols for building and integrating application software. It’s sometimes referred to as a contract between an information provider and an information user – establishing the content required from the consumer (the call) and the content required by the producer (the response).
In our case, the lambda function is the producer: it uses the model endpoint to predict a score for the input (the call) provided by consumers, which can be any web application developed by front-end developers.
- Select "API Gateway" from the list of AWS services and select **** the "Create API" option to create a new REST API.

- From the list of API types, select REST API and click on Build.

- Select New API, give it a nice name, and create it. Make sure you leave the Endpoint Type as Regional.

- Click on the Actions dropdown and select "Create Resource" to create a new resource. Next, click on Actions again and create a new POST method.

- Once you create a POST method, you will get an option to integrate the method with the lambda function you created; enter the name of your lambda function to continue.

Once you have created the API, you will need to deploy it. As you can see, the dashboard shows the architecture of your API and how it is integrated with your lambda function.

- You can deploy the API by clicking on the Actions tab and selecting the Deploy API option. This will provide you with a link that you can use to send POST requests to your model endpoint.
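For reference, the same console steps can also be scripted with boto3. The sketch below is illustrative only: the region, account id, function name, and resource path are all placeholder assumptions, and it wires up a non-proxy integration with a default response, like the console's Lambda Function option:

import boto3

region = 'ap-south-1'                # assumption: your region
account_id = '123456789012'          # placeholder: your AWS account id
function_name = 'cust-churn-lambda'  # assumption: your lambda function name

apigw = boto3.client('apigateway', region_name=region)
lam = boto3.client('lambda', region_name=region)

# create a Regional REST API
api = apigw.create_rest_api(name='cust-churn-api',
                            endpointConfiguration={'types': ['REGIONAL']})
api_id = api['id']

# create a /predict resource under the root resource
root_id = next(r['id'] for r in apigw.get_resources(restApiId=api_id)['items']
               if r['path'] == '/')
res = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart='predict')

# add a POST method and integrate it with the lambda function
apigw.put_method(restApiId=api_id, resourceId=res['id'],
                 httpMethod='POST', authorizationType='NONE')
lambda_arn = f'arn:aws:lambda:{region}:{account_id}:function:{function_name}'
apigw.put_integration(
    restApiId=api_id, resourceId=res['id'], httpMethod='POST',
    type='AWS', integrationHttpMethod='POST',
    uri=f'arn:aws:apigateway:{region}:lambda:path/2015-03-31/functions/{lambda_arn}/invocations',
)
apigw.put_method_response(restApiId=api_id, resourceId=res['id'],
                          httpMethod='POST', statusCode='200')
apigw.put_integration_response(restApiId=api_id, resourceId=res['id'],
                               httpMethod='POST', statusCode='200')

# allow API Gateway to invoke the lambda function
lam.add_permission(
    FunctionName=function_name,
    StatementId='apigateway-invoke',
    Action='lambda:InvokeFunction',
    Principal='apigateway.amazonaws.com',
    SourceArn=f'arn:aws:execute-api:{region}:{account_id}:{api_id}/*/POST/predict',
)

# deploy to a stage; POST requests go to the printed URL
apigw.create_deployment(restApiId=api_id, stageName='test')
print(f'https://{api_id}.execute-api.{region}.amazonaws.com/test/predict')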

Bringing it All Together
Now comes the moment to test whether our model results are available to the world through our deployment. We can use Postman to test our API.
Create a new request in Postman, paste in the link your REST API gave you, select Body as the input type and POST as the request type, and provide the input data.

Once you click on Send, Postman will send a request to your API, which will pass it to the lambda function to get a response.
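If you would rather test from code than from Postman, here is a minimal sketch using the requests library; the invoke URL is a placeholder you should replace with the one from your Deploy API step:

import requests

# placeholder: the invoke URL returned when you deployed the API
url = 'https://<api-id>.execute-api.ap-south-1.amazonaws.com/test/predict'

# the same test record used earlier
payload = {
    "State": "SC", "Account Length": "15", "Area Code": "836",
    "Phone": "158-8416", "Int'l Plan": "yes", "VMail Plan": "no",
    "VMail Message": "0", "Day Mins": "10.018992664834252", "Day Calls": "4",
    "Day Charge": "4.226288822198435", "Eve Mins": "2.3250045529370977",
    "Eve Calls": "0", "Eve Charge": "9.97259241534841",
    "Night Mins": "7.141039871521733", "Night Calls": "200",
    "Night Charge": "6.436187619334115", "Intl Mins": "3.2217476231887012",
    "Intl Calls": "6", "Intl Charge": "2.559749162329034", "CustServ Calls": "8",
}

response = requests.post(url, json=payload)
print(response.status_code, response.text)  # expect 200 and "True" or "False"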
Hurray! You have built a complete, end-to-end model deployment pipeline.

Conclusion
In summary, we have:
- Built a complete CI/CD-compliant model development pipeline on AWS.
- Accessed the model endpoint using an AWS lambda function to perform preprocessing.
- Built a REST API to expose our model to front-end applications.
- Tested our pipeline using Postman.
At this stage, you should take a moment to look back at all the things you have done and try to think of new cool experiments to make this even better.
If you found my work helpful, here is what you can do:
Post a comment to let me know your thoughts, or if you found any issues.
Share the article with your friends.