
Deploying an AI Edge App using OpenVINO

In this article, we will learn how to deploy an AI edge application using Intel's OpenVINO toolkit.

In my previous articles, I discussed the basics of the OpenVINO toolkit, OpenVINO’s Model Optimizer, and the Inference Engine. In this article, we will be exploring:

  • Types of Computer Vision models.
  • Pre-trained models in OpenVINO.
  • Downloading Pre-trained models.
  • Deploying an Edge App using a pre-trained model.

Types of Computer Vision Models

There are different types of computer vision models, used for various purposes, but the three main types are:

  • Classification
  • Object Detection
  • Segmentation

A classification model identifies the "class" of a given image or of an object in the image. The classification can be binary, i.e. yes or no, or span thousands of classes such as person, apple, car, cat, and so on. ResNet, DenseNet, and Inception are some well-known classification models.

Object detection models determine the objects present in an image and usually draw bounding boxes around them. They also use classification to identify the class of the object inside each bounding box. You can set a confidence threshold so that low-confidence detections are rejected, as in the sketch below. R-CNN, Fast R-CNN, and YOLO are some examples of object detection models.
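For instance, filtering raw detections by confidence might look roughly like the sketch below (it assumes the common SSD-style output rows of [image_id, label, confidence, x_min, y_min, x_max, y_max]; check your model's documentation for its exact output layout):

def filter_detections(detections, threshold=0.5):
    ### Keep only detections whose confidence is at or above the threshold
    kept = []
    for det in detections:
        image_id, label, conf, x_min, y_min, x_max, y_max = det
        if conf >= threshold:
            kept.append((int(label), float(conf), (x_min, y_min, x_max, y_max)))
    return kept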

Segmentation models perform pixel-wise classification of the given image. There are two different types of segmentation: Semantic Segmentation and Instance Segmentation. In Semantic Segmentation, all the objects that belong to the same class are considered the same, whereas in Instance Segmentation each object is considered distinct even if it belongs to the same class. For example, if there are five people in an image, a Semantic Segmentation model will treat all five of them as the same, whereas an Instance Segmentation model will treat all five of them as different. U-Net and DRN are examples of segmentation models.

Pre-trained Models in OpenVINO

Pre-trained models, as the name suggests, are models that have already been trained, often to high or even cutting-edge accuracy. Training a deep learning model requires a lot of time and computing power. It is certainly exciting to create your own model and tune its hyperparameters (number of hidden layers, learning rate, activation function, etc.) to achieve higher accuracy, but that takes hours of work.

By using pre-trained models, we avoid the need for large-scale data collection and long, costly training. Given knowledge of how to preprocess the inputs and handle the outputs of the network, you can plug these directly into your own app.

OpenVINO provides a lot of pre-trained models in its model zoo. The model zoo has a Free Model Set and a Public Model Set. The Free Model Set contains pre-trained models already converted to the Intermediate Representation (.xml and .bin) using the Model Optimizer; these models can be used directly with the Inference Engine. The Public Model Set also contains pre-trained models, but these are not yet converted to the Intermediate Representation.
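If you want to use a model from the Public Model Set, you convert it to the Intermediate Representation yourself with the Model Optimizer. A rough example, assuming the default Linux installation path and treating the model and output paths as placeholders (the exact flags depend on the source framework and model):

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model path_to_downloaded_model --output_dir path_to_output_dir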

Downloading Pre-trained Models

In this article, I will be loading the "vehicle-attributes-recognition-barrier-0039" model from the open model zoo.

To download a pre-trained model, follow these steps (type the commands in Command Prompt/Terminal):

  1. Navigate to the Model Downloader directory

For Linux:-

cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/model_downloader

For Windows:-

cd C:/Program Files (x86)/IntelSWTools/openvino/deployment_tools/open_model_zoo/tools/model_downloader

I have used the default installation directory in the above commands; if your installation directory is different, navigate to the appropriate path.

  2. Run downloader.py

The downloader Python script requires some arguments; you can use the "-h" flag to see the available ones.

python downloader.py -h

Let’s download the model,

python downloader.py --name vehicle-attributes-recognition-barrier-0039 --precisions FP32 --output_dir /home/pretrained_models
  • --name → model name.
  • --precisions → model precision (FP16, FP32 or INT8).
  • --output_dir → path where the downloaded model will be saved.

After the download completes, navigate to the output directory and you will find the model's ".xml" and ".bin" files.

Kindly refer to the documentation for more details (inputs and outputs) about the model.
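If you prefer to check these details programmatically, a minimal sketch like the one below prints every input and output blob with its shape; it uses the same IENetwork attributes that appear later in "inference.py" (replace the placeholder paths with the downloaded .xml and .bin files):

from openvino.inference_engine import IENetwork

### Load the IR files of the downloaded model
net = IENetwork(model='path_to_xml', weights='path_to_bin')

### Print the name and shape of every input and output blob
for name, blob in net.inputs.items():
    print("Input:", name, blob.shape)
for name, blob in net.outputs.items():
    print("Output:", name, blob.shape)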

Deploying an Edge App

Now that we have downloaded the pre-trained model, let's deploy it in an edge app.

Let’s create a file "inference.py" to define and work with the Inference Engine. In my previous article about the Inference Engine, I used separate functions, but here I will define a class.

from openvino.inference_engine import IENetwork, IECore

class Network:
    def __init__(self):
        self.plugin = None
        self.network = None
        self.input_blob = None
        self.exec_network = None
        self.infer_request = None

    def load_model(self):
        ### Declare the Inference Engine core and read the IR files
        self.plugin = IECore()
        self.network = IENetwork(model='path_to_xml', weights='path_to_bin')

        ### Defining the CPU Extension path
        CPU_EXT_PATH = "/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so"
        ### Adding the CPU Extension
        self.plugin.add_extension(CPU_EXT_PATH, "CPU")

        ### Get the supported layers of the network
        supported_layers = self.plugin.query_network(network=self.network, device_name="CPU")
        ### Finding unsupported layers
        unsupported_layers = [l for l in self.network.layers.keys() if l not in supported_layers]
        ### Checking for unsupported layers
        if len(unsupported_layers) != 0:
            print("Unsupported layers found")
            print(unsupported_layers)
            exit(1)

        ### Loading the network into the Inference Engine
        self.exec_network = self.plugin.load_network(self.network, "CPU")
        ### Store the name of the input blob
        self.input_blob = next(iter(self.network.inputs))
        print("MODEL LOADED SUCCESSFULLY!!!")

    def get_input_shape(self):
        ### Shape of the input expected by the model
        return self.network.inputs[self.input_blob].shape

    def synchronous_inference(self, image):
        ### Perform a synchronous (blocking) inference request
        self.exec_network.infer({self.input_blob: image})

    def extract_output(self):
        ### Return the output blobs of the completed request
        return self.exec_network.requests[0].outputs

Don’t get confused! I’ll explain every function.

  • __init__(self):

It’s the constructor of the class Network, where I initialize the data members of the class.

  • load_model(self):

As the name suggests, it is used to load the pre-trained model. In this function, we:

▹ Declare an IECore object.

▹ Declare an IENetwork object.

▹ Load the model's .xml and .bin files.

▹ Check for unsupported layers.

▹ Load the IENetwork object into the IECore object.

  • get_input_shape(self):

Returns the shape of the input required by the model.

  • synchronous_inference(self, image):

Performs synchronous inference on the input image.

  • extract_output(self):

Returns the output from the model after the inference is completed.
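Putting it all together, here is a quick usage sketch of the class (assuming the model paths inside load_model() point to the downloaded IR files):

from inference import Network

plugin = Network()
plugin.load_model()              # reads the IR files and loads the network on the CPU
print(plugin.get_input_shape())  # e.g. [batch_size, channels, height, width]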

So, that was "inference.py". Now let’s create a file "main.py".

import cv2
import numpy as np
from inference import Network

def preprocessing(image, height, width):
    ### Resize the image (OpenCV expects (width, height))
    image = cv2.resize(image, (width, height))
    ### Move the color channels first: (H, W, C) -> (C, H, W)
    image = image.transpose((2, 0, 1))
    ### Add a batch dimension: (C, H, W) -> (1, C, H, W)
    image = np.reshape(image, (1, 3, height, width))
    return image
  1. When resizing the image with OpenCV's resize(), you must pass the width first and then the height.
  2. According to the documentation, the model expects the color channels first and then the image dimensions, but OpenCV reads the image dimensions first and then the channels, so I used transpose() to bring the color channels first.
  3. The model takes its input as (batch_size, color_channels, height, width), so we reshape the image to add a "batch_size" of 1 (see the quick shape check after this list).
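As a quick standalone check (not part of "main.py"), you can run the preprocessing on a dummy image and verify the shapes:

### A fake 480x640 BGR image, shaped (height, width, channels) like cv2.imread() returns
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
print(dummy.shape)                         # (480, 640, 3)
print(preprocessing(dummy, 64, 64).shape)  # (1, 3, 64, 64)

Back in "main.py", the main() function ties everything together: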
def main():
    ### Read the image
    image = cv2.imread('path_to_image')
    ### Declare a Network object and load the model
    plugin = Network()
    plugin.load_model()
    ### Input shape required by the model: (batch_size, channels, height, width)
    input_shape = plugin.get_input_shape()
    height = input_shape[2]
    width = input_shape[3]
    ### Preprocess the input
    p_image = preprocessing(image, height, width)
    ### Perform synchronous inference
    plugin.synchronous_inference(p_image)
    ### Extract the output
    results = plugin.extract_output()

According to the documentation, the output (results) from the model is a dictionary containing the following information:

  1. "color", shape: [1, 7, 1, 1] – Softmax output across seven color classes [white, grey, yellow, red, green, blue, black]
  2. "type", shape: [1, 4, 1, 1] – Softmax output across four type classes [car, bus, truck, van]

Since these are softmax outputs, we need to map the index of the maximum value to the corresponding color and type.

    color = ['white', 'grey', 'yellow', 'red', 'green', 'blue', 'black']
    vehicle = ['car', 'bus', 'truck', 'van']
    ### Finding out the color and type
    result_color = str(color[np.argmax(results['color'])])
    result_type = str(vehicle[np.argmax(results['type'])])
    ### Add the details to the image
    font = cv2.FONT_HERSHEY_SIMPLEX
    font_scale = 1
    col = (0, 255, 0)  # BGR
    thickness = 2
    color_text = 'color: ' + result_color
    type_text = 'vehicle: ' + result_type
    cv2.putText(image, color_text, (50, 50), font, font_scale, col, thickness, cv2.LINE_AA)
    cv2.putText(image, type_text, (50, 75), font, font_scale, col, thickness, cv2.LINE_AA)
    ### Save the image
    cv2.imwrite('path/vehicle.png', image)
if __name__=="__main__":
    main()

I tried it on two vehicles and got the following output:

[Output images. Source: Author]

Well, that’s all, folks. I hope you now have a proper understanding of how to deploy an AI edge application using OpenVINO. OpenVINO has various pre-trained models for several applications; try implementing different ones from the OpenVINO model zoo and create your own edge application. Thank you so much for reading my article.

