Read along to learn how to do this using a serverless function on the Google Cloud Platform (GCP).

As a data scientist/engineer, I often have code that needs to run periodically. This could be anything from processing log files every day at 2:00 PM to running a machine learning model every day at 1:00 AM.
If it can run within a memory limit of 8 GiB and in under 9 minutes, then it’s probably worth implementing it as a serverless function.
If that’s of interest to you, then in this article, I will show you how to schedule your code using a serverless architecture, utilizing the Cloud Functions serverless compute product from Google Cloud Platform (GCP).
To learn more about Google Cloud Functions and its benefits, check out my other article on Medium. The first section explains it in a concise way. 😄
Machine Learning Model as a Serverless Endpoint using Google Cloud Functions
⚠️ Housekeeping ⚠️
This article assumes you already have a GCP account. If you don’t have one, sign up here; new accounts come with some free credits.
If you want to interact with your GCP account from your local machine, install the Google Cloud SDK using the steps outlined here.
Make sure to enable the APIs for Cloud Storage, Cloud Functions, Pub/Sub, and Cloud Scheduler in your GCP project using the API console.
All the code in this article was developed in Python 3.8. All the necessary code is made available via a GitHub Gist.
Example Task & Solution Architecture
So, what are we building? For the purposes of this article, we are going to schedule a cloud function that prints money every 15 minutes. Not literal money of course 😄 , but the word money.

We are going to use 4 services from the Google Cloud Platform (GCP) to do this.
- Cloud Functions: Oh yes, this serverless compute service will host and execute all our code. It will be triggered every 15 minutes. When it executes, it will run our code, which writes the word "money" to a text file and saves it to a Google Cloud Storage bucket. It’s a simple task that can be easily adapted to your use case.
- Cloud Pub/Sub: This is an event-driven, real-time messaging service that allows us to build systems that communicate asynchronously. It enables a design with event producers and consumers, also known as publishers and subscribers. In our case, Cloud Scheduler will publish a Pub/Sub event, which will then trigger our Cloud Function consumer, which listens for a particular topic on the Pub/Sub service.
- Cloud Scheduler: This is a fully managed, enterprise-grade cron job scheduler from GCP. It can schedule basically anything. In this case, we use it to publish a Pub/Sub event for a topic every 15 minutes.
- Cloud Storage: Well… not much to say here really. It’s basically a location to hold any kind of data. It might not be as sexy as the others, but in my opinion, it’s the reliable backbone of everything on GCP!
And now…we build! 🚀 🚧
Step 0: Create a Google Cloud Storage Bucket

In the GCP console, search for storage to find Cloud Storage and click Create bucket. Give the bucket an appropriate name and make sure to create it in the same region as where you plan to run the cloud function. You can leave the rest of the settings as-is and hit Create.
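If you prefer the command line and have the Cloud SDK installed (see Housekeeping), the bucket can also be created with gsutil. The bucket name and region below are placeholders; swap in your own:

```bash
# Create a regional bucket (replace the region and bucket name with your own)
gsutil mb -l us-central1 gs://your-money-bucket
```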

Step 1: Create and Configure Cloud Function
Now search for cloud functions in the GCP console and click Create function. Give the cloud function an appropriate name and make sure it’s in the same region as the storage bucket.
Select the function **Trigger type** to be **Cloud Pub/Sub**.

Click Create a topic to create a new Pub/Sub topic that will trigger this cloud function. Give it an appropriate name and click Create topic, followed by Save to finalize the Pub/Sub trigger.
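If you are scripting things instead of clicking through the console, the topic can also be created from the CLI; the topic name here is a placeholder:

```bash
# CLI alternative for creating the Pub/Sub topic
gcloud pubsub topics create money-topic
```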

Now, under the Runtime, build and connections settings section, leave the Runtime tab settings as-is. Our function is quite simple, so an execution environment with 256 MiB of memory is more than enough.
However, in the Connections tab, select Allow internal traffic only. Do this for security reasons: it only allows traffic from within the project environment, so the function cannot be triggered by a malicious external request.
Once done, click Next to code the function.
Step 2: Code & Deploy the Cloud Function
You should now see the inline source code editor window. This is where we define our runtime environment and code the function to execute.
Select the Runtime environment as Python 3.8, as we will be coding in Python.

As you can see, there are 2 files displayed below the Source Code Inline Editor tab. Let’s learn what they are.
File: main.py
This file is where all the function code resides and is executed when the trigger event happens. Since we have selected the trigger as Pub/Sub, there should be a single function in this file with the signature hello_pubsub(event, context), populated by default.
This is the signature for the main function that gets triggered by a Pub/Sub event. You can of course change the main function name, but make sure to update it accordingly in the Entry point field. This is how the environment knows which function to call to handle a Pub/Sub event.
For the purposes of this article, we will leave the name as is and just update the contents.
File: requirements.txt
This is where we declare the libraries required to be installed in the cloud function environment to execute our function. By default, the environment comes pre-installed with a bunch of libraries. Since this is a simple function, there are not many additional libraries that we need to install.
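For this function, the only addition we need is the Cloud Storage client library, so a minimal requirements.txt could be as simple as this (pinning a version is optional and up to you):

```text
# requirements.txt — minimal sketch for this tutorial
google-cloud-storage
```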
Code & Deploy
You can copy and paste the contents of these 2 files from the gist below. The code is self-explanatory and is commented. Please do reach out if you have any questions. 😃
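If the gist embed doesn’t load for you, here is a minimal sketch of what main.py boils down to. The bucket name is a placeholder, and the timestamped file name is just one reasonable choice, not the only way to do it:

```python
# main.py — a minimal sketch, assuming google-cloud-storage
# is listed in requirements.txt.
from datetime import datetime

from google.cloud import storage

BUCKET_NAME = "your-money-bucket"  # placeholder: replace with your bucket


def hello_pubsub(event, context):
    """Triggered by a Pub/Sub message; writes 'money' to a new text file."""
    client = storage.Client()  # uses the function's default service account
    bucket = client.bucket(BUCKET_NAME)
    # Timestamp the object name so every run creates a new file
    blob_name = f"money_{datetime.utcnow():%Y%m%d_%H%M%S}.txt"
    bucket.blob(blob_name).upload_from_string("money")
    print("OK")  # shows up in the function logs
```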
Once done copying the code from the gist into the relevant files, you can hit deploy.
This will take some time, as the Cloud Function environment is being set up and all the requirements are installed. You should see a loading circle next to the function name.
A green tick mark will appear next to the function name when the function is successfully deployed. 👊
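For reference, a roughly equivalent deployment can also be done from the CLI, assuming main.py and requirements.txt sit in your current directory; all the names and the region below are placeholders:

```bash
# Deploy the function from the local directory
gcloud functions deploy money-printer \
  --runtime python38 \
  --trigger-topic money-topic \
  --region us-central1 \
  --memory 256MB \
  --ingress-settings internal-only \
  --entry-point hello_pubsub
```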

Well then… let’s check if it works! 😅 To do that, click the three-dot button under Actions and click Test function.
Step 3: Testing the Cloud Function

Since our cloud function does not require any input data or context, we can just click the blue Test the function button, leaving the trigger event input blank.
When the function completes successfully, it should print OK in the log below. In case of error, read the logs for diagnosis.
We should also see a new text file in the cloud storage bucket, with money in it! 💲 💲 😄
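The same test can be run from the CLI if you prefer; the function name is a placeholder, and the empty payload stands in for a blank Pub/Sub event:

```bash
# Invoke the function manually with an empty event payload
gcloud functions call money-printer --data '{}'
```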

Now all that’s left is scheduling the function to run periodically.
Step 4: Scheduling the Cloud Function
Back in the GCP console, search for Cloud Scheduler and click Create job. This should bring you to a setup page to configure the cron job.

Under the Define the job section, give the job an appropriate name and specify the scheduling frequency in unix-cron format.
Since we want our function to run every 15 minutes, use the following: */15 * * * *. You can learn more about the format from here and adapt it to your needs.
Oh, and make sure you are in the correct time zone. 😅
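If unix-cron is new to you, here is a quick cheat sheet. The five fields are minute, hour, day of month, month, and day of week:

```text
# minute  hour  day-of-month  month  day-of-week
  */15    *     *             *      *        # every 15 minutes
  0       1     *             *      *        # every day at 01:00
  0       14    *             *      1-5      # 14:00, Monday to Friday
```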

Under the Configure the job’s target section, select Pub/Sub as the target. This should bring up all the topics in the current project. Make sure you select the correct Pub/Sub topic, as created in Step 1.
It is mandatory to provide a message body. We are just going to say ‘hello’, although it does not get processed anywhere in our architecture.

Now hit Create to schedule the job. It will run our cloud function every 15 minutes.
Rather than waiting, we can click Run now on the scheduler jobs page to trigger the function immediately.
You should be able to see a new money text file in the Cloud Storage bucket, created around the time you clicked run now. In case of an error, check the logs for diagnosis.
From then on, you should see a new money text file added to the Cloud Storage bucket every 15 minutes.
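For completeness, the job can also be created and triggered from the CLI; the job and topic names below are placeholders:

```bash
# Create the scheduled job (every 15 minutes) targeting our Pub/Sub topic
gcloud scheduler jobs create pubsub money-job \
  --schedule="*/15 * * * *" \
  --topic=money-topic \
  --message-body="hello"

# Trigger it immediately instead of waiting for the next tick
gcloud scheduler jobs run money-job
```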

Congratulations, you have now successfully scheduled a serverless function to execute periodically. 😄 🚀
⚠️ NOTE: Make sure to delete or pause the Cloud Scheduler job and the other resources to avoid incurring ongoing costs. 🔥
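If you used the CLI, cleanup might look like this; again, all names are placeholders:

```bash
# Stop the schedule (pause keeps the job; delete removes it entirely)
gcloud scheduler jobs pause money-job

# Remove the function and the bucket with its contents
gcloud functions delete money-printer
gsutil rm -r gs://your-money-bucket
```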
Final Thoughts
Usually, people (including me) jump straight to VMs and other tools to run their cron jobs. But it might be worth revisiting those tasks to see if their memory/computation constraints fit within the limits of Cloud Functions.
From a data science systems perspective, I can see so many tasks that could be adapted to this architecture: regularly processing CSV data files, running model predictions, generating daily summary report tables, loading external files into BigQuery tables, and so on.
Furthermore, having a serverless architecture like this enables a lot of flexibility and improves maintainability. Well… that’s what microservices architecture is all about. *Terms and conditions apply.* 😄
Hopefully, this article gives you some food for thought in terms of your next solution architecture.
Thank you for reading.
I hope you found this article useful. Please reach out to me if you have any questions or if you think I can help. Always looking to connect with new friends. 😄
You might also like these articles by me:
The Only Data Science/Machine Learning Book I Recommend
Machine Learning Model as a Serverless App using Google App Engine
3 Simple Side Hustles to Make Extra Income per Month as a Data Scientist