Indoor Robot Localization with SLAM

Simultaneous Localization and Mapping (SLAM) navigation using an RPLidar A1 on a Raspberry Pi, with remote point-cloud visualization over MQTT

Anand P V
Towards Data Science


Robots are increasingly replacing humans in factories and hotels. But how can a robot navigate dynamically indoors without prior knowledge of the environment? If you already have a map, you can work out where you are on it. Otherwise, you need to generate the map (mapping) while also estimating your location on that map (localization). Imagine how hard that can be, even for a human, in an unfamiliar place.

SLAM (Simultaneous Localization And Mapping) algorithms use LiDAR and IMU data to locate the robot in real time while building a coherent map of surrounding landmarks such as buildings, trees, rocks, and other world features.

This classic chicken-and-egg problem has been approximately solved using methods such as the Particle Filter, the Extended Kalman Filter (EKF), covariance intersection, and Graph SLAM. SLAM enables accurate mapping where GPS localization is unavailable, such as in indoor spaces.

Understandably, SLAM is a core algorithm in autonomous cars, robot navigation, robotic mapping, and virtual and augmented reality. If we can do robot localization on a Raspberry Pi, it becomes easy to build a wheeled or walking robot that can navigate indoors autonomously.

First, let’s discuss Graph SLAM and build a custom implementation. Then we will integrate a Raspberry Pi with an RPLidar A1 M8, powered by a 5V 3A battery, and run SLAM along with visualization of the LIDAR point cloud map, to assist navigation or even to generate a floor map. Finally, the LIDAR point cloud map is visualized on a remote machine using MQTT.

Project Demo: RPLidar A1 integrated with a Raspberry Pi running BreezySLAM

Graph SLAM

Assume a robot in a 2D world tries to move 10 units to the right, from x to x'. Due to motion uncertainty, x' = x + 10 may not hold exactly; instead, x' follows a Gaussian distribution centered around x + 10. The Gaussian peaks when x' approaches x + 10.

Image by Author: Robot movement from x0 to x1 to x2 is characterized by 2 Gaussian functions

If x1 is 10 units away from x0, the uncertainty is modeled, as in a Kalman Filter, by a Gaussian in (x1 - x0 - 10). Hence, there is still some probability associated with displacements smaller than 10 and larger than 10.

There is another, similar Gaussian at x2, with a higher spread. The total probability of the entire route is the product of the two Gaussians. We can drop the constants, as we just need to maximize the likelihood of the position x1 given x0. The product of Gaussians then becomes a sum of exponent terms, i.e. each constraint involves only the x's and the sigmas.
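
To make this concrete, write d1 and d2 for the two commanded displacements and σ1, σ2 for the motion noise; maximizing the product of Gaussians is then equivalent to minimizing the sum of the squared exponent terms:

    \max_{x_1, x_2}\ \exp\left(-\frac{(x_1 - x_0 - d_1)^2}{2\sigma_1^2}\right)\exp\left(-\frac{(x_2 - x_1 - d_2)^2}{2\sigma_2^2}\right)
    \quad\Longleftrightarrow\quad
    \min_{x_1, x_2}\ \frac{(x_1 - x_0 - d_1)^2}{\sigma_1^2} + \frac{(x_2 - x_1 - d_2)^2}{\sigma_2^2}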

Graph SLAM models the constraints as a system of linear equations, with an Ω matrix containing the coefficients of the variables and a ξ vector containing the constraint values. Every time an observation is made between two poses, a 'local addition' is done on the four corresponding matrix elements (since the product of Gaussians becomes a sum of exponents).

Let’s say the robot moves from x0 to x1 to x2, with displacements of 5 and -4 units.

Image by Author: Omega Matrix and Xi vector after 2 movements

The coefficients of the x's and the right-hand-side values are added to the corresponding cells. Now consider a landmark L0 at a distance of 9 units from x1.

Image by Author: Omega matrix and Xi vector after considering the landmark L0

Once the Ω matrix and ξ vector are filled in as above, compute μ = Ω⁻¹ ξ (the equation below) to get the best estimates of all the robot locations:

To estimate robot position

Custom Implementation: Graph SLAM

You need to update the values in the 2D Ω matrix and ξ vector to account for motion and measurement constraints in both the x and y directions.
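
As a minimal 1D sketch of this bookkeeping (the 2D case simply repeats it for x and y, so each pose and landmark contributes two rows and columns), here are the Ω/ξ updates for the worked example above (moves of 5 and -4, and a landmark 9 units from x1), followed by the μ = Ω⁻¹ ξ solve. The helper name add_constraint and the use of NumPy are my own choices for illustration, not taken from the notebook:

    import numpy as np

    # Variables: x0, x1, x2, L0  ->  indices 0, 1, 2, 3
    N = 4
    omega = np.zeros((N, N))
    xi = np.zeros(N)

    def add_constraint(i, j, d):
        """Local addition for the constraint x_j - x_i = d (unit noise assumed)."""
        omega[i, i] += 1; omega[j, j] += 1
        omega[i, j] -= 1; omega[j, i] -= 1
        xi[i] -= d;       xi[j] += d

    # Anchor the first pose at the origin
    omega[0, 0] += 1; xi[0] += 0

    add_constraint(0, 1, 5)    # motion x0 -> x1 of +5 units
    add_constraint(1, 2, -4)   # motion x1 -> x2 of -4 units
    add_constraint(1, 3, 9)    # landmark L0 measured 9 units from x1

    mu = np.linalg.inv(omega) @ xi
    print(mu)                  # approximately [0, 5, 1, 14] for x0, x1, x2, L0

Note that a landmark measurement uses exactly the same four-element local addition as a motion update; only the indices differ.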

The complete source code and results of Custom SLAM implementation can be found in the IPython notebook here.

We can see SLAM in action by deploying it on a Raspberry Pi with an RPLidar A1 M8, powered by a 5V 3A battery. See the assembled gadget below.

Image by Author: SLAM Assembled Device

As you can see in the video at the top, the portable unit is carried across various rooms of my house, and the real-time trajectory is transmitted to the MQTT server and also stored on the SD card of the RPi.

You can see the visualization of the LIDAR point cloud map and the estimated robot trajectory, rendered with PyRoboViz, below. In the video above, you can see me moving across different rooms on the floor.
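
For reference, here is a minimal single-threaded loop in the spirit of the rpslam example that ships with BreezySLAM; the serial device path, map size, and sample threshold are assumptions, and the deployed version additionally moves the LIDAR reads into a separate thread (see the note under References):

    from breezyslam.algorithms import RMHC_SLAM
    from breezyslam.sensors import RPLidarA1 as LaserModel
    from rplidar import RPLidar as Lidar          # Python driver for the RPLidar
    from roboviz import MapVisualizer             # PyRoboViz

    MAP_SIZE_PIXELS = 500                         # assumed map resolution
    MAP_SIZE_METERS = 10                          # assumed map extent
    LIDAR_DEVICE    = '/dev/ttyUSB0'              # assumed serial port on the RPi
    MIN_SAMPLES     = 200                         # skip scans with too few points

    lidar = Lidar(LIDAR_DEVICE)
    slam  = RMHC_SLAM(LaserModel(), MAP_SIZE_PIXELS, MAP_SIZE_METERS)
    viz   = MapVisualizer(MAP_SIZE_PIXELS, MAP_SIZE_METERS, 'SLAM')
    mapbytes = bytearray(MAP_SIZE_PIXELS * MAP_SIZE_PIXELS)

    scans = lidar.iter_scans()
    next(scans)                                   # discard the first partial scan
    prev_distances, prev_angles = None, None

    while True:
        items = next(scans)                       # list of (quality, angle_deg, distance_mm)
        distances = [item[2] for item in items]
        angles    = [item[1] for item in items]

        if len(distances) > MIN_SAMPLES:
            slam.update(distances, scan_angles_degrees=angles)
            prev_distances, prev_angles = distances, angles
        elif prev_distances is not None:          # reuse the previous scan if this one is sparse
            slam.update(prev_distances, scan_angles_degrees=prev_angles)

        x_mm, y_mm, theta_deg = slam.getpos()
        slam.getmap(mapbytes)
        if not viz.display(x_mm / 1000., y_mm / 1000., theta_deg, mapbytes):
            break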

Interestingly, we can re-route the real-time visualization to a remote machine using MQTT. The robot position, angle, and map are encoded as a byte array on the RPi and decoded by the MQTT client on the remote machine.
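
A paho-mqtt based sketch of this encoding could look as follows; the broker address, topic name, and the pose-then-map payload layout are illustrative assumptions rather than the exact format used in the project:

    import struct
    import paho.mqtt.client as mqtt

    MQTT_BROKER = 'broker.local'          # assumed broker reachable from the RPi
    MQTT_TOPIC  = 'slam/state'            # assumed topic name

    client = mqtt.Client()                # paho-mqtt 1.x style client constructor
    client.connect(MQTT_BROKER, 1883, 60)

    def publish_state(x_mm, y_mm, theta_deg, mapbytes):
        # Pack the pose as three doubles (24 bytes), then append the raw occupancy map
        payload = struct.pack('!ddd', x_mm, y_mm, theta_deg) + bytes(mapbytes)
        client.publish(MQTT_TOPIC, payload)

    # On the remote machine, the subscriber reverses the encoding and feeds PyRoboViz:
    def on_message(client, userdata, msg):
        x_mm, y_mm, theta_deg = struct.unpack('!ddd', msg.payload[:24])
        mapbytes = bytearray(msg.payload[24:])
        # viz.display(x_mm / 1000., y_mm / 1000., theta_deg, mapbytes)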

Note that the dense, linear point clusters in the LIDAR map represent stable obstacles such as walls. Hence, we can use algorithms like the Hough Transform to fit lines to these linear point clusters and generate a floor map.
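
Assuming the map is the same flat byte array produced by the SLAM loop above, with darker pixels marking occupied cells, an OpenCV sketch of this line extraction might look like the following; the threshold and Hough parameters are illustrative and need tuning:

    import numpy as np
    import cv2

    MAP_SIZE_PIXELS = 500   # assumed: same resolution as the SLAM map

    def extract_wall_segments(mapbytes):
        # Reshape the flat occupancy map into a 2D grayscale image
        grid = np.frombuffer(bytes(mapbytes), dtype=np.uint8).reshape(MAP_SIZE_PIXELS, MAP_SIZE_PIXELS)

        # Assumption: low pixel values mark occupied cells (walls); tune the cutoff for your map
        obstacles = np.uint8(grid < 100) * 255

        # Probabilistic Hough Transform: fit straight segments to the dense wall points
        lines = cv2.HoughLinesP(obstacles, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=30, maxLineGap=5)
        return [] if lines is None else [tuple(l[0]) for l in lines]   # (x1, y1, x2, y2) in pixels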

From the 3D LIDAR point cloud, we can even construct a 3D map of the surroundings using Structure from Motion techniques (see the sketch after this list):

  • Use detectors such as SIFT, SURF, ORB, or Harris to find features like corners, gradients, and edges.
  • Use descriptors such as HOG to encode these features.
  • Use matchers such as FLANN to match features across images.
  • Use 3D triangulation to reconstruct a 3D point cloud.
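
Here is a two-view OpenCV sketch of that pipeline (ORB features, FLANN matching with an LSH index, then pose recovery and triangulation); the camera intrinsics K and all parameter values are assumptions, and a real reconstruction would add bundle adjustment across many views:

    import numpy as np
    import cv2

    def reconstruct_pair(img1, img2, K):
        """Sparse two-view reconstruction sketch, given grayscale images and intrinsics K (3x3)."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # FLANN with an LSH index handles ORB's binary descriptors
        flann = cv2.FlannBasedMatcher(dict(algorithm=6, table_number=6, key_size=12,
                                           multi_probe_level=1), dict(checks=50))
        pairs = [p for p in flann.knnMatch(des1, des2, k=2) if len(p) == 2]
        good  = [m for m, n in pairs if m.distance < 0.75 * n.distance]   # Lowe's ratio test

        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

        # Essential matrix -> relative pose -> triangulated 3D points
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (pts4d[:3] / pts4d[3]).T   # N x 3 point cloud in the first camera's frame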

We can use this idea of SLAM-based indoor navigation to deploy an autonomous mobile robot inside closed environments such as airports, warehouses, or industrial plants.

The source code of this solution is available here.

If you have any queries or suggestions, you can reach me here.

References

1. Breezy Implementation of SLAM: https://github.com/simondlevy/BreezySLAM

Note: I contacted the author of BreezySLAM, Prof. Simon D. Levy, CSE Dept., Washington and Lee University, about a LIDAR scan data error during visualization. I fixed the code by moving the LIDAR scan code into a separate thread with inter-thread communication, and he acknowledged the fix.

2. LIDAR Point Cloud Visualization: https://github.com/simondlevy/PyRoboViz

3. Udacity Computer Vision Nanodegree: https://www.udacity.com/course/computer-vision-nanodegree--nd891

4. LIDAR data scan code stub by Adafruit: https://learn.adafruit.com/remote-iot-environmental-sensor/code

5. LIDAR Distance Estimation: https://en.wikipedia.org/wiki/Lidar

6. RPLIDAR A1 M8 Hardware Specification: https://www.generationrobots.com/media/rplidar-a1m8-360-degree-laser-scanner-development-kit-datasheet-1.pdf

