
Self Driving Car – Localization

How does a self driving car know where it is at any given time?

Image by Pixabay

One application of artificial intelligence that has always fascinated me is the self-driving car. It is my main motivation for learning more about deep learning and AI.

One of the most vital tasks a self-driving car must accomplish before it can do anything else is localization. Not only self-driving cars but any mobile robot must first estimate where it is, since every other function depends on knowing its true location. In this article, we will find out how to predict the position of a car using Monte Carlo Localization.

One way to determine the location of an object or a car is GPS. By turning on the GPS service on our smartphones, we can easily see where we are in the world with the help of satellites. But there is a problem with using GPS as a localization tool: GPS has an uncertainty of nearly 10 meters. If a self-driving car had that large an error margin, it could easily crash into a building on the roadside, mistaking it for part of the road. So self-driving cars use cameras, lasers, and infrared sensors to identify their location on a known roadmap.

For the sake of understanding, let’s consider a car that only performs a 1D motion and its world consists of red and green blocks.

Figure 1: World consists of blocks of the green and red color [image by author]

At the beginning, before the car has taken any sensor data from the environment, it has no idea where it is, so the level of uncertainty is at its highest. If we graph the probability distribution of where the car might be, it looks like this:

Figure 2: Probability of the car to be in block i (1,2,...,6) [image by author]

Here, we can see that every location is equally likely with a probability of 0.167. That means the car can be in any of the 6 blocks.

Now let's assume the sensor makes a measurement: it senses the color green with 80% certainty (a measurement may occasionally be wrong due to sensor error). You now believe your car is in front of one of the green blocks. So we multiply 0.8 (the certainty) with the prior probability of every block whose color matches the measurement, and 0.2 with the prior of every block whose color does not. After normalizing, the probability distribution looks like this:

Figure 3:Probability of the car to be in block i (1,2,...,6) [image by author]

After the measurement, your belief changes: you now assign a higher probability (0.2667) to being in front of a green block than a red one. The red blocks still retain some probability (0.0667) because your sensor is not 100% accurate, so there is a small chance the sensor made a mistake and you are actually in front of a red block. These small probabilities are represented in the graph. If the sensor were 100% accurate, you would be absolutely certain of being in front of a green block, and the probability of the red blocks would be 0.

The above calculations are based on Bayes' Rule.

Before sensing the color, we had a prior probability distribution P(Xi), the probability of being in each block, shown in the chart above. Then we took a measurement Z. P(Z | Xi) = 0.8 is the likelihood: if we were in block Xi, the probability of sensing the correct color is 0.8, and this is the same for every block whose color matches the measured color; for blocks of the other color, its value is 0.2 (these values vary between sensors). The posterior probability after the measurement, P(Xi | Z), the probability of being in block Xi given that measurement Z (sensing a color) has been made, is proportional to P(Z | Xi) · P(Xi).

That is exactly what we did. We normalized the values afterward so the total probability equals 1.
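The measurement update can be sketched in a few lines of Python. This is a minimal illustration, assuming a hypothetical world of 3 green and 3 red blocks and the 0.8/0.2 sensor certainties from above:

```python
# Measurement (sense) update: multiply each prior by the sensor
# likelihood for that block, then normalize so the total is 1.
world = ['green', 'red', 'red', 'green', 'green', 'red']  # hypothetical layout
p = [1.0 / len(world)] * len(world)                       # uniform prior

def sense(p, world, measurement, p_hit=0.8, p_miss=0.2):
    """Bayes update: posterior ∝ likelihood × prior."""
    q = [p[i] * (p_hit if world[i] == measurement else p_miss)
         for i in range(len(p))]
    total = sum(q)
    return [v / total for v in q]

posterior = sense(p, world, 'green')
print([round(v, 4) for v in posterior])
# → [0.2667, 0.0667, 0.0667, 0.2667, 0.2667, 0.0667]
```

Green blocks end up with probability 4/15 ≈ 0.2667 and red blocks with 1/15 ≈ 0.0667, matching the chart.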

Now think about motion. Let's assume the world is cyclic: if the car moves right from the 6th block, it lands on the 1st block. The car moves with mechanical parts, which also have uncertainty. Let's assume the car moves one block at a time with a movement certainty of 0.9: if it tries to move one block, there is a 90% chance it actually moved the desired block and a 10% chance it never moved and stayed at its current location.

If the motion were 100% accurate, moving in a given direction would simply shift the probability distribution one block in that direction.

Figure 4: Probability of the car to be in block i (1,2,...,6) [image by author]

Here we assumed the world is circular, so when we moved with absolute certainty, our probability distribution shifted in that direction. The reason: if the car is currently most likely in the 4th block, then after moving one block to the right it is most likely in the 5th block. The probability distribution shifts along with the motion.

But if the motion is not 100% certain, we have to use the law of total probability to compute the distribution after a motion. After a move, the probability of a block comes from its previous block with certainty 0.9, and it may also keep its current probability with certainty 0.1, in case the car hasn't moved at all. The new probability of each block is the weighted sum of these two:

P(Xi) = P(Xi-1) · P(motion) + P(Xi) · P(stay)
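In code, the motion update is a cyclic shift with mixing. A minimal sketch, assuming the 0.9/0.1 motion certainties from above (the modulo index makes the world wrap around):

```python
# Motion update in a cyclic world: each block's new probability is the
# weighted sum of "arrived from the previous block" and "failed to move".
def move(p, steps=1, p_move=0.9, p_stay=0.1):
    """Shift the distribution right by `steps` blocks with motion noise."""
    n = len(p)
    return [p[(i - steps) % n] * p_move + p[i] * p_stay for i in range(n)]

p = [0, 0, 0, 1.0, 0, 0]  # certain the car is in the 4th block
print(move(p))
# → [0.0, 0.0, 0.0, 0.1, 0.9, 0.0]
```

Starting from certainty in block 4, one noisy move right leaves 0.9 in block 5 and 0.1 behind in block 4, exactly the weighted sum in the formula above.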

So if we move with 0.9 certainty, the probability distribution becomes:

Figure 5: Probability of the car to be in block i (1,2,...,6) [image by author]

Here we can see that the distribution is more spread out. Notice the pattern: every measurement decreases the level of uncertainty, while every motion increases it (or leaves it unchanged).

Now let's take multiple measurements and see whether we can actually identify our position with this method.

Figure 6: 1D world comprising green and red blocks [image by author]

Let us think from the point of view of a Self-driving car.

Action:

  1. It senses green
  2. It moves 1 block right
  3. It senses red
  4. It moves 1 block right
  5. It senses red
  6. It does not move
  7. It senses red
  8. It moves 1 block right
  9. It senses green
  10. It moves 1 block right

Now tell me: according to these actions, where might the car be right now?

Yes, it should be in the 5th block. Let's see what the calculation shows.

According to the actions, measurement vector = ['green', 'red', 'red', 'red', 'green']

Motion vector = [1, 1, 0, 1, 1]
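The whole sense/move cycle over these two vectors can be sketched as below. The world layout here is my assumption, chosen to be consistent with the action sequence (the exact layout of Figure 6 may differ), with the same 0.8/0.2 and 0.9/0.1 certainties as before:

```python
# Full localization cycle: alternate a Bayes measurement update (sense)
# with a total-probability motion update (move) in a cyclic world.
world = ['green', 'red', 'red', 'green', 'red', 'green']  # hypothetical layout
measurements = ['green', 'red', 'red', 'red', 'green']
motions = [1, 1, 0, 1, 1]

def sense(p, measurement, p_hit=0.8, p_miss=0.2):
    q = [p[i] * (p_hit if world[i] == measurement else p_miss)
         for i in range(len(p))]
    s = sum(q)
    return [v / s for v in q]

def move(p, steps, p_move=0.9, p_stay=0.1):
    n = len(p)
    return [p[(i - steps) % n] * p_move + p[i] * p_stay for i in range(n)]

p = [1.0 / 6] * 6                      # start fully uncertain
for z, u in zip(measurements, motions):
    p = move(sense(p, z), u)           # sense, then move

print('most likely block:', p.index(max(p)) + 1)  # most likely block: 5
```

With this layout only one starting block matches the whole measurement sequence, so after five sense/move cycles the distribution peaks sharply at block 5.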

I have written code (link given below); if you feed it these vectors, it will give you the probability distribution.

Code link: https://github.com/ishtiakm/self_driving_car/blob/master/senseandmov1D.py

Figure 7: The probability of the car being in each of these blocks

The probability distribution is maximum at the 5th block.

Figure 8: Probability of the car to be in block i (1,2,...,6) [image by author]

After multiple measurements, the car is much more certain about its position, and we can see that its prediction matches ours.

The above method is the core algorithm a self-driving car uses to identify its location in the real world. In reality, the world is not just green and red, and the motion is not one-dimensional. The real world introduces more noise and adverse conditions such as rain and snow, but the algorithm we learned here remains the same. The car already knows its 'world' list (see the code) from mapping data such as Google Earth. Its camera takes images and uses them as measurements. By matching the two, the car updates its probability distribution and understands where it is and whether it is on the right side of the lane.

This logic can also be implemented in a 2D world of green and red blocks. Try implementing it yourself by editing the 1D functions in the given code. The link to the code for the 2D world is also given below, but try it yourself first and play around with it. I assure you it's pretty amazing.

Code link: https://github.com/ishtiakm/self_driving_car/blob/master/senseandmov1D.py
