
Machine learning deployment is definitely a hot topic right now. I was about to use the so-hot-right-now meme to back up that first sentence, but I feel everyone in the industry agrees with me on this anyway. There are many cool approaches to deploying and operationalizing ML models. However, they mostly introduce new tools, each with its own ecosystem to work in. This adds a new layer of complexity and might eventually create lock-in effects, as you need to adjust your code to optimally fit the platform.
Open-source tools such as Docker and Kubernetes grant much more freedom on this matter, but they are quite complex to use. So I built a Python library that allows you to professionalize, test and deploy your model on Kubernetes without ever leaving Python. Sounds too good to be true? Well, let’s see what you think.
The productionize library
Apart from the admittedly shitty name of the package – suggestions are highly welcome – it allows you to do some pretty neat things to ease and speed up your deployment experience.
Let’s first take a look at what the library with the catchy name does.
What does it do?
Brief answer: it allows you to containerize your ML model without leaving Python. On top of that, it builds a local Kubernetes cluster on your machine to test whether the containerized model is working properly. If you are thinking, wait, I don’t have a local Kubernetes cluster, don’t worry, I built in some functions that take care of this. Let’s take a closer look.
To begin your journey with productionize, you first set up your local "workbench", as I call it. This is pretty much a Minikube cluster with a Docker registry, running in a VM on your machine. Anyone who has set up such a stack before knows that this is a bit annoying to do. With productionize, it is two lines of Python code.
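Schematically, those two lines look like this – treat this as a sketch rather than the exact API; the walkthrough below and the docs on GitHub are the source of truth:

```python
from productionize import workbench

cluster = workbench()  # scan the machine for the required tools
cluster.setup()        # install what is missing (illustrative call)
```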

Once you have this "workbench" running, the productionize library expects a ML model that is already built into an API structure. You can use e.g. Flask for this, but also other tools. However, I would definitely recommend Flask, as I really like its lightweight approach.
With a few more lines of Python, you can then containerize your API and deploy it to the local workbench to test it. If everything works fine, you can push the container to any Kubernetes cluster you like. So, let’s take a look at the code.
How can you do that?
First, you will need to install the library, which is hosted on PyPI – a plain pip install productionize will do. I am currently not supporting conda, please forgive me for that.
After that, you can import the library. Please be aware that the library currently does not support Jupyter notebooks, as I am not using the Jupyter kernel to set up the VM, the Minikube cluster and all the other nitty-gritty components.
The library contains two major classes. The first one is the workbench() class, which takes care of the tools you need on your machine to do all those amazing things. The second one is the product() class, which handles your API and its containerization.
Let’s start with setting up the workbench. The workbench() class will install, manage and uninstall Docker, VirtualBox, Kubectl and Minikube for you. First, we initialize the class. This will check your local machine for the said components and mark those missing to be installed.
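In code, initializing looks like this:

```python
from productionize import workbench

# initializing the class checks the machine for Docker, VirtualBox,
# Kubectl and Minikube and marks the missing ones for installation
cluster = workbench()
```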
Now we can install the missing components and configure them to work well with each other. Along the way, you will probably need to enter your sudo password.
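The install step is a single method call – the name below is a sketch, so double-check the docs if it does not match your version:

```python
# install and configure the missing components;
# may prompt for your sudo password along the way
cluster.setup()  # illustrative method name – see the docs
```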
If something goes wrong in this step, I added a debug method for this. Let’s say the installation of Minikube failed; you can then just run something along these lines (the exact signature is in the docs):
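```python
# re-run the failed component in debug mode;
# method name and argument are illustrative – see the docs
cluster.debug("minikube")
```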
Once the setup step is completed, you can fire up the workbench, which essentially starts the Minikube cluster on your machine.
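Since there is a stop_cluster() method (more on that below), the start counterpart looks something like this – again, a sketch:

```python
# spin up the local Minikube cluster
cluster.start_cluster()  # illustrative name, mirroring stop_cluster()
```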
Next, you should create a project on the workbench, which creates a Kubernetes namespace. I would advise you to do so, as it creates some sense of order on your cluster.
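A sketch of this step – both the method and argument names are placeholders:

```python
# create a project, i.e. a Kubernetes namespace, to keep things tidy
cluster.open_project(name="my-project")  # placeholder method/argument names
```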
If you are sick of your workbench, you can easily stop the cluster using the stop_cluster() method, and you can even cleanly remove all the unwanted components using the uninstall() method.
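Those two calls are straightforward (assuming no arguments are needed):

```python
cluster.stop_cluster()  # shut down the local Minikube cluster
cluster.uninstall()     # cleanly remove the installed components
```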
Let’s now move on to the product() class. This class turns your ML API into a container and deploys it to the workbench. First, we have to initialize the class.
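In the sketch below I assume a plain constructor – the docs list any optional arguments:

```python
from productionize import product

api = product()  # assuming an argument-less constructor – see the docs
```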
You can now prepare the deployment, which will essentially create a Dockerfile. I added this intermediary step so that you can edit the Dockerfile if you need to.
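The prepare step could look like this – the method and argument names are illustrative; what matters is that it writes a Dockerfile into your project folder:

```python
# write a Dockerfile for the API; names below are illustrative
api.prepare_deployment(api_file="api.py",
                       requirements_file="requirements.txt")
```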
Whether or not you edited the Dockerfile, you can now deploy it to the local workbench using the deploy() method.
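That is a one-liner (any optional arguments are omitted here – see the docs):

```python
api.deploy()  # build the image, run a pod on the workbench, expose the service
```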
This will create the Docker image on the Minikube registry, deploy a pod with your container and expose the service to the outside. In the output of the deploy() method, you will find the link to your API. You can use this link to test whether your model works just fine. If it does, you can push it to any Kubernetes cluster you want, using the push_product() method and passing it the registry link of your Kubernetes cluster.
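For example – the registry URL is obviously a placeholder, and the argument name is illustrative:

```python
# push the container to the registry of your target cluster
api.push_product(registry="registry.example.com/my-project")
```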
Okay, so that was a lot. I know this is a bit of a complicated structure, so let me summarize it again.
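Putting it all together, the whole workflow reads roughly like this – same caveat as above, the calls not spelled out earlier are illustrative:

```python
from productionize import workbench, product

# 1. set up and start the local workbench
cluster = workbench()
cluster.setup()                          # illustrative name
cluster.start_cluster()                  # illustrative name
cluster.open_project(name="my-project")  # illustrative name

# 2. containerize the API and test it on the workbench
api = product()
api.prepare_deployment(api_file="api.py",
                       requirements_file="requirements.txt")  # illustrative
api.deploy()

# 3. ship it to a production cluster, then clean up locally
api.push_product(registry="registry.example.com/my-project")
cluster.stop_cluster()
cluster.uninstall()
```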

There is additional information and documentation on GitHub and PyPI – do check it out. Also, there are of course some funny easter eggs included.
What is next?
As you can see, the library provides some nice features; however, it is far from complete. At the moment, productionize only works on macOS. This is due to the fact that I am using Homebrew as the main package manager. I am already working on a version for Ubuntu. Any Linux distribution should be manageable; Windows, however, is going to be quite a challenge. So if you have suggestions or ideas, please let me know. Also, I would be more than happy to work on any bugs if you drop me an issue at https://github.com/LJStroemsdoerfer/productionize.