From training to deployment: Stop biting your nails with machine learning

Learn how to train a machine learning model with only a few clicks, put it into practice in a simple web application, and deploy it to the cloud to share it with others.

Guilherme Caponetto
Towards Data Science


To accomplish all that, we will be using Teachable Machine, ReactJS, GitHub Actions, and the Developer Sandbox.

Photo by Patrick Fore on Unsplash

Introduction

Teachable Machine is a mind-blowing tool that makes it easy to train and test machine learning models, no expertise in the area required. Datasets can be created right from a webcam or a microphone and used to train image, sound, or pose classifiers.

The training itself is done automatically by the tool, and the resulting model can be tested right after the training is completed. If you are happy with the model you just built, you can export it to TensorFlow formats and employ it in your application.

Here is a video showcasing Teachable Machine that is worth watching.

Now that you know Teachable Machine a little better, you are probably already brainstorming the many applications that could be built with such an amazing tool. Check out this repository, which provides a curated list of projects built with Teachable Machine.

In that spirit, I would like to show how to build a simple web application using a model trained with Teachable Machine. The application will also be deployed to the cloud to make it more realistic and let you share it with others.

And don’t worry, we will use only free stuff. Also, everything runs client-side, which means that no personal data is sent to any server.

Let’s dive in!

Walkthrough

Step 1: Think about an application

Our application will be a nail-biting detector!

The purpose of this detector is to monitor the user through their webcam while they are working/studying/playing/etc., and fire annoying alerts whenever their fingers are close to their mouth. Hopefully, this will plant an unpleasant trigger in the user's brain, making them think twice before moving their fingers toward their mouth again.

Since we will build a web application and deploy it to the cloud, almost any device with a camera and a browser can be used for this modern take on nail-biting monitoring. However, the focus will be on people sitting in front of their computers, i.e. facing their webcams, which keeps our model simple to build.

I understand that some people struggle to get rid of this compulsive habit, so let’s try to build something that could potentially help others, right?!

Step 2: Train the machine learning model

Training our nail-biting detector is easy with Teachable Machine.

Using the webcam, we need to gather images for two classes: (1) a nail-biting class, represented by images where fingers are close to the mouth; (2) a normal class, represented by images where fingers are away from the mouth.

So, once you get started with Teachable Machine (go to Get Started → Image Project → Standard Image Model), keep pressing the record button until you get around 500 images per class, which should be more than enough for this classifier.

It is important to vary the position (hands and face mainly) while collecting the images to create diversity in the dataset. Also, include some images where there is no one in front of the camera for the normal class, just to make sure the model understands that nobody in front of the camera means everything is fine. You can even create a nobody class!

Building the training dataset with images taken from the webcam

Once the dataset is gathered, it is just a matter of training the model and testing it. If the results look good, export the trained model to the TensorFlow.js format. The output will be a zip file containing two JSON files (model.json and metadata.json) and the weights (weights.bin).

Testing the resulting trained model

Note: Keep in mind that the resulting trained model is very simple and will probably work best with the same person who was recorded while creating the dataset, under similar conditions. Nevertheless, you can improve it with more elaborate training, such as gathering more data and tuning the training hyper-parameters.
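For reference, the exported files are typically consumed in the browser through the official @teachablemachine/image library, which is what we will rely on in the web application. Here is a minimal sketch of loading and querying the model; the file paths are illustrative:

```typescript
import * as tmImage from "@teachablemachine/image";

// Illustrative paths; point these at wherever the exported files are hosted.
// The weights.bin file is fetched automatically, relative to model.json.
const MODEL_URL = "/static/models/nail-biting-detector/model.json";
const METADATA_URL = "/static/models/nail-biting-detector/metadata.json";

async function classify(image: HTMLVideoElement): Promise<void> {
  // Load the model topology/weights and the class metadata.
  const model = await tmImage.load(MODEL_URL, METADATA_URL);

  // Returns one { className, probability } entry per class, e.g.
  // [{ className: "nail-biting", probability: 0.97 },
  //  { className: "normal", probability: 0.03 }]
  const predictions = await model.predict(image);
  console.log(predictions);
}
```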

Step 3: Create the web application

I’m not going to go over implementing a ReactJS application from scratch, since that is not the goal of this post. Instead, I have already prepared boilerplate code for our web application. It is a simple ReactJS application that I called Teachable Machine Playground, where you can easily add new pages that exercise your new models.

The code is open-source and available on this GitHub repository for you to clone or fork. You will find a simple ReactJS application with some components ready to be used. Scripts and GitHub workflows are also available to make life easier.

Before going further, clone the repository and build the application. Make sure you have Node.js and Yarn installed, then run yarn install && yarn build:dev in the root folder of the application. This command will install all dependencies if they are not already installed and build the code in development mode.

Now let’s check out how we would add a new model to this code.

3a) Add the trained model files

Unzip the file that you exported from Teachable Machine and place the extracted files in a new folder under the /static/models folder.

In my case, I made a folder named nail-biting-detector and put the model files in there.

Adding the trained model files

3b) Configure the routes

Look for the Routes.tsx file (/src/app/Routes.tsx). You will have to add entries to the nav and models objects corresponding to your new page route and model files, respectively. These entries will allow you to create a new routed page for your model and make it easy to access the model files.

In my case, I added entries with the nbd key, which is short for nail-biting detector. When all steps are done, I can access my new page through the /nail-biting-detector route.

Creating routes for the page and model files
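As an illustration, the entries could look something like the sketch below. The exact shape of the nav and models objects comes from the repository, so treat the structure here as an assumption; the nbd values are the ones described above:

```typescript
// Sketch only: the real nav and models objects live in /src/app/Routes.tsx.
export const nav = {
  home: "/",
  nbd: "/nail-biting-detector", // new routed page
};

export const models = {
  nbd: "/static/models/nail-biting-detector", // base path of the files from step 3a
};
```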

3c) Set up the model information

After adding the model files to the application and configuring the routes accordingly, let's create an object that contains some useful information about the model and makes it easy to load in the steps ahead.

Look for the Model.tsx file (/src/app/Model.tsx) and export a ModelDescriptor const. In my case, I added the NAIL_BITING_DETECTOR_MODEL const with the information related to the Nail-Biting Detector model.

Setting up the model information (only showing the relevant part of the code)
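Roughly speaking, the descriptor bundles the model name and the URLs of the exported files. The field names below are my guesses for illustration; check Model.tsx for the actual ModelDescriptor shape:

```typescript
// Sketch only: field names are hypothetical; see /src/app/Model.tsx for the real type.
export const NAIL_BITING_DETECTOR_MODEL: ModelDescriptor = {
  name: "Nail-Biting Detector",
  modelUrl: `${models.nbd}/model.json`,       // exported from Teachable Machine (step 3a)
  metadataUrl: `${models.nbd}/metadata.json`, // the class names live in here
};
```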

3d) Add the model page

Now that the routes are ready, you just need to implement the logic of your page under the /src/pages folder. I’ve prepared some components and hooks to make it easy to develop pages that use the webcam and fire notifications.

In my case, the page NailBitingDetectorPage simply renders the webcam and fires annoying notifications (browser and sound) when the nail-biting class is detected. Pretty simple, huh?!

Note: The notification behavior may vary depending on the OS/browser combination. I have validated it on Ubuntu/Chrome, Android/Chrome, Windows/Chrome, and macOS/Safari.

Creating the NailBitingDetectorPage (only showing the relevant part of the code)
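To give an idea of what such a page involves, here is a heavily simplified sketch using the official @teachablemachine/image library and the browser Notification API. It is not the repository's actual implementation (which wraps the webcam and notifications in reusable components and hooks, and also plays a sound); the paths and the confidence threshold are illustrative:

```tsx
import * as React from "react";
import * as tmImage from "@teachablemachine/image";

const MODEL_URL = "/static/models/nail-biting-detector/model.json";
const METADATA_URL = "/static/models/nail-biting-detector/metadata.json";
const THRESHOLD = 0.9; // illustrative confidence threshold for firing an alert

export function NailBitingDetectorPage() {
  const videoRef = React.useRef<HTMLVideoElement>(null);

  React.useEffect(() => {
    let running = true;

    async function run() {
      // Ask for notification permission and open the webcam.
      await Notification.requestPermission();
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      const video = videoRef.current!;
      video.srcObject = stream;
      await video.play();

      const model = await tmImage.load(MODEL_URL, METADATA_URL);

      // Classify roughly one frame per second and alert when "nail-biting" wins.
      while (running) {
        const predictions = await model.predict(video);
        const nailBiting = predictions.find((p) => p.className === "nail-biting");
        if (nailBiting && nailBiting.probability > THRESHOLD) {
          new Notification("Stop biting your nails!");
        }
        await new Promise((resolve) => setTimeout(resolve, 1000));
      }
    }

    run();
    return () => {
      running = false;
    };
  }, []);

  return <video ref={videoRef} muted playsInline />;
}
```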

Finally, you need to go to App.tsx (/src/app/App.tsx) and map the route to the page so it can be accessed.

Mapping the route with NailBitingDetectorPage (only showing the relevant part of the code)
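Assuming the playground uses React Router (v6 syntax shown below; v5 differs slightly), the mapping boils down to something like this sketch; the import paths are hypothetical names for illustration:

```tsx
import * as React from "react";
import { Routes, Route } from "react-router-dom";
// Hypothetical import paths; the real ones live in the repository.
import { nav } from "./Routes";
import { NailBitingDetectorPage } from "../pages/NailBitingDetectorPage";

// Sketch of the route-to-page mapping inside App.tsx (a <BrowserRouter> is
// assumed to wrap the app higher up in the tree).
export function App() {
  return (
    <Routes>
      <Route path={nav.nbd} element={<NailBitingDetectorPage />} />
    </Routes>
  );
}
```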

3e) Build and run

Once all previous steps are done, you are ready to try it out!

Execute yarn install && yarn build:dev in the root folder of the application to rebuild the code with your new page and model.

Then, execute yarn start to start the development server and open https://localhost:9001 in your browser. You can even load the application on other devices connected to the same network, but you will need to use your IP address rather than localhost.

Teachable Machine Playground home page

Note: Every time you add a new route, it will be shown on the home page as a link to be navigated to.

Step 4: Build and push the container image

For this step, you will have to build the production code with yarn build:prod, build the container image with yarn build:image, and push it to a registry such as quay.io or Docker Hub.

I’m using podman to build the image, but feel free to use docker if you prefer. You will have to make some minor changes to the yarn build:image script (/package.json) if you are not using podman.

To customize the image registry, name, or tag, you can export the following pre-defined environment variables before building the image:

IMAGE_REGISTRY (default value: quay.io/caponetto)

IMAGE_NAME (default value: teachable-machine-playground)

IMAGE_TAG (default value: latest)

As you can see, using the default values will prepare a container image named teachable-machine-playground to be pushed to my quay.io account under the tag latest, which is available here.

If you plan to push the code to your GitHub account, I also prepared a GitHub workflow (/.github/workflows/publish.yml) that builds and pushes the container image to quay.io. You just have to customize the environment variables and add your quay.io password as a repository secret (REGISTRY_PASSWORD).

Environment variables for pushing the container image to quay.io (only showing the relevant part of the code)

Step 5: Deploy your web application to the Developer Sandbox

Now, let’s create an application on the Developer Sandbox for Red Hat OpenShift (or simply Developer Sandbox) using our container image that has been published to quay.io.

If you are not familiar with the Developer Sandbox, here is a quote from the official website that pretty much sums up all its capabilities:

The sandbox provides you with a private OpenShift environment in a shared, multi-tenant OpenShift cluster that is pre-configured with a set of developer tools. You can easily create containers from your source code or Dockerfile, build new applications using the samples and stacks provided, add services such as databases from our templates catalog, deploy Helm charts, and much more.

In addition, if you want to learn more, check out a talk I gave where I show how to connect the Developer Sandbox with Managed Kafka and DMN.

Once you are logged in to your Developer Sandbox instance, go to the Developer perspective and click Add. You will be presented with various ways to create an application. Click on the Container images card and fill out the form with the appropriate information.

You will have to provide the image URL and secure the route so it is served over HTTPS (mobile notifications only work over HTTPS). The GIF below shows how to do it in detail.

Deploying the container image to the Developer Sandbox

Alternatively, you can deploy your image using the GitHub Workflow that I mentioned earlier (/.github/workflows/publish.yml). In this case, you have to set up secrets that are associated with your OpenShift instance (OPENSHIFT_SERVER and OPENSHIFT_TOKEN) and the namespace (OPENSHIFT_NAMESPACE).

Note: The token from the Developer Sandbox expires daily.

Environment variables for deploying to OpenShift (only showing the relevant part of the code)

Once this information is ready, you can select the “deploy” checkbox when triggering the Publish workflow. In addition to building the web application and pushing the container image to quay.io, the workflow will also deploy the application on your OpenShift instance. Automation ❤

Okay, we are almost there! Now that you’ve created the application from the container image, it is just a matter of waiting a few seconds for it to spin up. Once it is ready, you can access and test your application.

Testing the deployed application

As you can see, the browser fires a desktop notification (partially shown at the top of the GIF) as soon as I move my hand near my mouth. You can't hear it in the GIF, but the browser fires an annoying sound alert as well.

If you build this application and want to monitor yourself, leave this browser window open and do your things normally. You will be notified if you get distracted and place your hands near your mouth. :D

IMPORTANT: Once again, if you intend to use this application as your personal nail-biting detector, be sure to load it in a dedicated browser window. You can use your computer normally (opening other windows on top of the application window) as long as you do not minimize it. The webcam stops capturing frames if you minimize the browser window or switch tabs within the same window.

Note: The Developer Sandbox keeps your applications up for 8 hours. After this time period, you have to scale them up again.

Final remarks

In this post, I’ve shown how easy it is to create web applications that include capabilities derived from machine learning models. My intent is to motivate people to think about the applicability of machine learning models rather than spending time on the coding itself. That’s why I chose Teachable Machine and provided boilerplate code (kept as simple as possible).

Hopefully, you can exercise some ideas and easily create proofs of concept with these tools. If they turn out to be applicable, you can then spend more time on the coding part (training a more advanced model and building a nice-looking application).

I will try to think of more applications and add them to the repository. Also, it would be nice to have desktop and mobile versions of the playground so we can have the full power of the OS.

Maybe we can trigger something on a device when snapping fingers, clapping hands, or making certain gestures? That old phone at the bottom of your drawer could be turned into a monitoring butler that reacts to a gesture or sound you make! It would be awesome to automate stuff and save time, wouldn’t it?

So make sure to try out your ideas and share the results!

Note: If you spot any issues or want to contribute to the code, don’t hesitate to open an issue, discussion, or pull request. You will be more than welcome! Also, here’s my personal website, where you can find my social media links to connect with me.

And that’s all for today. Thanks for reading! 😃
