Robot following a walkway with OpenCV and TensorFlow

How to make a self-driving robot with a Raspberry Pi, computer vision and a convolutional neural network.

Constantin Toporov
Towards Data Science


After my robot learned how to follow a line, a new challenge appeared. I decided to go outdoors and make the robot move along a walkway. It would be nice if the robot followed its host through a park like a dog.

The implementation idea came from behavioral cloning. It is a very popular approach for self-driving vehicles: the AI learns from recorded examples of behavior (inputs and the corresponding outputs) and then makes decisions on new input. There is an article from Nvidia where they introduced this method.

Many good articles describe this idea:

Even more exciting are the real-life implementations. The best example is DonkeyCar and its neural network.

Unfortunately, the naive approach of training a neural network on color photos did not succeed. Park photos in the fall are mostly gray, so a network trained on resized and blurred pictures produced anything but reliable results.

To simplify the task for the AI, I decided to preprocess the images with computer vision techniques. The OpenCV library offers many capabilities and had worked fine when I needed to detect a white line on the floor.

This task turned out to be not so easy. The pictures were mostly gray, and the main question was “which gray belongs to the walkway”. Observation showed that the walkway was close to “perfect gray”, with minimal difference between its RGB values.

Brightness was another criterion. It was hard to determine the walkway brightness automatically, so the first picture was used to tune the color parameters. Color filtering was then applied based on those parameters, which produced a mask with the walkway contour.
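As a rough sketch of this kind of filtering (the thresholds and the helper name here are illustrative, not the exact values used on the robot), the idea looks roughly like this in OpenCV:

import cv2
import numpy as np

def walkway_mask(bgr, max_channel_diff=20, min_gray=90, max_gray=200):
    # "Perfect gray": the B, G and R channels differ only slightly.
    b, g, r = cv2.split(bgr.astype(np.int16))
    channel_diff = np.maximum(np.maximum(abs(b - g), abs(g - r)), abs(b - r))
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Keep pixels that are nearly gray and fall into the expected brightness band.
    mask = ((channel_diff <= max_channel_diff) &
            (gray >= min_gray) & (gray <= max_gray)).astype(np.uint8) * 255
    # Morphology removes noise so the walkway forms one solid blob.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask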

The shape of the contour was not precise and depended on color filtering parameters.

The next step is to make a decision (go straight or turn left/right) based on the walkway mask. The general idea of the classification is to look at the right edge of the walkway: if the edge sits too far to the left, steer left; if there is no edge at all, steer right; if the edge runs up from the bottom-right corner at a moderate angle, do nothing and just drive.

Given the indistinct blob shapes, it was hard to recognize the edge features with geometry alone, so I relied on a neural network to find the patterns. I sorted the walkway masks into three folders and trained the network on them.

Examples of left, right and straight masks:

In terms of machine learning, this is an image classification task with three classes. The grayscale masks were ideal input data, so even the simplest convolutional network showed excellent results. I used Keras on top of TensorFlow to train the net.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense
from keras.optimizers import SGD
# input_shape and cls_n (the number of classes, here 3) are defined elsewhere
model = Sequential()
activation = "relu"
model.add(Conv2D(20, 5, padding="same", input_shape=input_shape))
model.add(Activation(activation))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(50, 5, padding="same"))
model.add(Activation(activation))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(500))
model.add(Activation(activation))
model.add(Dense(cls_n))
model.add(Activation("softmax"))
opt = SGD(lr=0.01)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
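To train it on the three folders of masks, something along these lines works (the directory name, image size and epoch count are placeholders; Keras' ImageDataGenerator derives the classes from the subfolders):

from keras.preprocessing.image import ImageDataGenerator

# Assumed layout: masks/left, masks/right, masks/straight -- one subfolder per class.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(
    "masks", target_size=(64, 64), color_mode="grayscale",
    class_mode="categorical", batch_size=32, subset="training")
val_gen = datagen.flow_from_directory(
    "masks", target_size=(64, 64), color_mode="grayscale",
    class_mode="categorical", batch_size=32, subset="validation")
model.fit_generator(train_gen, epochs=20, validation_data=val_gen)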

The next challenge was to run everything on the Raspberry Pi.

I was using Raspbian 8 Jessie with an old TensorFlow build by Sam Abrahams and OpenCV 3.4 (which I had to build on my own). That version of TensorFlow was quite old and could not work with Keras models.

Fortunately, Google recently added Raspberry Pi support to TensorFlow, but it requires Raspbian 9 Stretch and Python 3, so I had to migrate all the robot firmware to the new platform. OpenCV also moved on during that year, and I ended up building OpenCV 4.0.

With everything put together, the robot is walking in a park.
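In outline, the per-frame loop ties these pieces together something like this (the model file name, image size and command names are placeholders; walkway_mask is the helper sketched above, and the class order must match the training folders):

from keras.models import load_model
import cv2
import numpy as np

model = load_model("walkway_model.h5")    # placeholder file name
commands = ["left", "right", "straight"]  # must match the class indices from training

def decide(frame):
    mask = walkway_mask(frame)             # color filtering step from above
    small = cv2.resize(mask, (64, 64)).astype("float32") / 255.0
    probs = model.predict(small.reshape(1, 64, 64, 1))[0]
    return commands[int(np.argmax(probs))] # e.g. "left" -> turn the tank left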

Conclusion

The hardest part is recognizing the road. The absence of lines makes this task difficult, and the recognition parameters have to be tweaked for the light and local conditions. But the grayscale masks are perfect material for teaching even a simple CNN and getting predictable results.

Links

GitHub repo with the complete tank firmware on python

Data and code for the neural network

Prebuilt OpenCV deb packages for Raspbian

Tank assembly instruction

More info about the tank
