Cellular Automata and Driverless Cars

Self-organizing networks, IoT, Machine Learning, and Trains

Robert Rennie
Towards Data Science


Introduction

This article started with a simple thought experiment: If all cars were driverless, would we need traffic lights?

To be clear, I’m speaking specifically of driverless cars, as opposed to self-driving cars, which still require a human at the wheel. Perhaps a semantic distinction, but for the rest of this article they will be referred to as driverless cars.

It is clear from the man-years of effort and billions of dollars of R&D being invested by every major technology company and automaker that the goal is ultimately to remove humans from the driving equation. So, when we do finally get there, a few perplexing questions come up:

  1. Won’t a system of 100% driverless cars essentially be a next-generation train network (as in the good ol’ locomotive kind), albeit one where the tracks are roads and the trains are cars networked together in real time?
  2. What will happen to the sophisticated interim technologies, such as the machine learning and deep neural networks trained to drive at the level of humans (bad idea!) and to interact with actual human drivers during the “transition period” that still allows for them? Once all cars are driverless and networked, we won’t need all that; we’ll just need to make sure the cars stay on the track and don’t bump into one another. So what are we doing?

Trains, the First Driverless Cars

Let’s get in a time machine, jump back in time, and imagine that instead of free-wheeling cars, track-based transportation took off, and some sort of track technology formed what we now know as our local streets and, subsequently, our towns and cities. This easily could have happened; Disney envisioned the future of transportation as monorails. It need not have been an actual rail, either: it could have been a drive-by-wire technology as well. All of our streets could have been “laid” with simple wiring, or street paint could have had some electromagnetic quality. We’d have all the benefits that the self-driving car pundits are claiming: reduced accidents, better use of driver (now passenger) time, and so on.

But, alas, that did not happen! And now we have out-of-control teenagers driving 100 mph while drinking a beer and texting their buddies. So, let’s get back in the time machine and move ahead in time, following the evolution of cars to 100% driverless (no humans driving at all), and imagine the technology that would be used or required to get there (and, perhaps more interestingly, what may not be used).

What if train hitches, the connectors that keep train cars from hitting one another and keep them optimally spaced, were replaced with some sort of self-organizing algorithm running within and among all driverless cars? Each car could communicate with every other car in its vicinity, and together they could come to an organized consensus on how to proceed optimally. Each could act with a cellular-automata-inspired algorithm, for example, as will be described later. Today, we already have cars that “sense” when they’re too close to another car and can even parallel park themselves. These can be viewed as an early version of such algorithms, providing a virtual replacement for train hitches.
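
To make this concrete, here is a very rough sketch of what a “virtual hitch” might look like in code. It assumes each car can read the position of the car ahead, and all names and constants are hypothetical, not taken from the simulation accompanying this article:

    // A minimal sketch of a "virtual hitch": each car adjusts its own
    // speed to hold a safe gap to the car ahead, with no central
    // controller. MIN_GAP, MAX_SPEED, and the car shape are made up.
    const MIN_GAP = 2;   // cells of separation to maintain
    const MAX_SPEED = 3; // cells traveled per generation

    function nextSpeed(car, carAhead) {
      if (!carAhead) return MAX_SPEED;       // open road ahead
      const gap = carAhead.position - car.position;
      if (gap <= MIN_GAP) return 0;          // too close: stop
      // Never move farther than the available gap allows.
      return Math.min(MAX_SPEED, gap - MIN_GAP);
    }

    // Example: a car 5 cells behind another may advance 3 cells.
    console.log(nextSpeed({ position: 0 }, { position: 5 })); // 3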

Next, augmenting these “computer senses” (described in detail below), we see the ultimate fruition of the Internet of Things (IoT). The street is laid with technology that can be sensed, and everything involved in transport (people, obstacles, etc.) is likewise imbued with an IoT device that can be sensed.

Computer Senses

Computers don’t see like humans, and ignoring that is the biggest mistake in AI today. Isn’t it rather strange that we spend so much time getting computers to act like humans (to hear speech, to see faces) yet don’t realize that these are not computer senses? Not to get too far out into TRON or The Matrix territory, but computers don’t naturally sense like we do.

For example, when 5G is ubiquitous, a group of a dozen driverless cars approaching an intersection will be able to instantaneously network with one another and determine how they can all navigate the intersection efficiently. A group of 12 humans cannot network one another’s thoughts in real time. Humans don’t have the WiFi “sense”; computers do.
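
Here is a hedged sketch of what that negotiation might look like: each car broadcasts when it arrived, and every car independently computes the same crossing order from the same set of broadcasts. The message format and names are made up for illustration:

    // Hypothetical sketch: cars agree on a crossing order with no
    // traffic light and no central controller. Each car broadcasts
    // { id, arrivalTime }, and every car sorts the same list locally,
    // so all of them derive the identical order.
    function crossingOrder(broadcasts) {
      return broadcasts
        .slice() // don't mutate the shared list
        .sort((a, b) =>
          a.arrivalTime - b.arrivalTime || a.id.localeCompare(b.id));
    }

    const msgs = [
      { id: "car-7", arrivalTime: 1042 },
      { id: "car-3", arrivalTime: 1042 }, // tie broken by id
      { id: "car-9", arrivalTime: 1038 },
    ];
    console.log(crossingOrder(msgs).map(m => m.id));
    // -> [ "car-9", "car-3", "car-7" ]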

So, in this one simple example, a natural computer “sense”, namely the ability to “see” other computers on a network and exchange algorithmic data instantly, is vastly more useful in a driverless car scenario than a neural network that has been trained to recognize objects. We are so focused on replacing humans with machines that have human senses that we don’t realize machines have their own vastly superior, yet different, senses.

What this means in the world of driverless cars is that everything must have a digital representation if it is to be “seen” by a driverless car. Instead of teaching a neural net to recognize a person walking across the street, that person must have some device that allows the driverless car to see them using its computer senses.
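
To make the idea concrete, a person’s digital representation could be as simple as a periodically broadcast record like the one below. This is a purely hypothetical payload, not any real V2X or IoT schema:

    // Hypothetical beacon a pedestrian's device might broadcast so
    // that nearby driverless cars can "see" them with computer senses.
    const pedestrianBeacon = {
      id: "ped-4f2a",        // anonymized identifier
      kind: "pedestrian",    // pedestrian | cyclist | obstacle ...
      position: { lat: 40.7128, lon: -74.006 },
      heading: 90,           // degrees clockwise from north
      speedMps: 1.4,         // a typical walking speed
      timestamp: Date.now(),
    };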

Sound crazy? Well, dogs already have chips embedded in them, and nearly every human on the planet has a smartphone in their pocket (or on their wrist). We’re almost already there. Wearables, implantables: they are all in our future. What we are doing is creating digital representations of ourselves so computers can see us.

Let’s Start With Ants

Have you ever looked at traffic from above in real time? Maybe from a traffic helicopter shot on the local news, or from an airplane when you’re about to land. Each little car looks a lot like an ant, and the whole system looks a lot like something you’d see at a macroscopic level in nature. You don’t notice each individual car, or ant, but as a whole they move with fluid dynamics. The fact that a car is being driven by perhaps the most intelligent of beings in the universe is completely lost; from above, it just appears to be a molecule of water flowing in a stream. A completely dumb molecule. (Granted, some drivers appear this way close up too.)

There are a number of ironies here:

  1. First, the machine learning of today cannot accurately reproduce an ant brain, yet we’re starting by reproducing human brains to drive our cars? How about we get ant simulation right first?
  2. Ants have different senses than humans. Maybe we can’t simulate ants because we aren’t simulating their senses.
  3. Why are we spending so much time and effort to recreate an intelligent being driving in one car when we can easily reproduce the macroscopic characteristics of traffic flow from above, and have been able to do so with traditional algorithms for years?
  4. By the way, why are we training neural networks to drive like humans when humans suck at driving?

Cellular Automata

A full history of cellular automata (CA; singular: cellular automaton) is beyond the scope of this article, but some brief elements of its past will be discussed. In addition, the computer simulation that was built in support of this article does not use a traditional CA rule or grid, but it does borrow the basic concepts of CA.

The basic idea of CA is to create a grid (traditionally 1 or 2 dimensions) of cells wherein each cell can be either “on” or “off”. Each cell uses a rule to determine whether it should be on or off, such as “I am off when more than 3 of my neighbor cells are on; otherwise I’m on”. The CA algorithm is run in iterations (i.e. generations), and cells turn on and off according to the rule.
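
Here is a minimal sketch of one generation of such a CA in JavaScript (the language of the simulation accompanying this article), using exactly the example rule quoted above:

    // One generation of a 2D cellular automaton using the example rule
    // from the text: a cell turns off when more than 3 of its 8
    // neighbors are on; otherwise it turns on.
    function step(grid) {
      const rows = grid.length, cols = grid[0].length;
      return grid.map((row, r) =>
        row.map((_, c) => {
          let on = 0;
          for (let dr = -1; dr <= 1; dr++) {
            for (let dc = -1; dc <= 1; dc++) {
              if (dr === 0 && dc === 0) continue; // skip self
              const nr = r + dr, nc = c + dc;
              if (nr >= 0 && nr < rows && nc >= 0 && nc < cols) {
                on += grid[nr][nc];
              }
            }
          }
          return on > 3 ? 0 : 1; // the rule quoted above
        })
      );
    }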

When humans view the output of these algorithms, our mind’s eye sees organic behavior; it looks like nature, like the view of traffic from the helicopter. We know it’s not actually organic behavior, it’s a computer simulation, but our brains see it as organic. The most frequently cited example of CA is Conway’s “Game of Life”, which is described and demonstrated well in its Wikipedia article (see References). A quick look at this example and you will see the organic nature of its output. While the rule governing the life of a cell (whether it’s on or off) is exceedingly simple, the resulting behavior is so complex that it would be nearly impossible to code any other way.
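
For comparison, Conway’s actual rule (often summarized as B3/S23) also fits in a few lines; only the rule differs from the sketch above:

    // Conway's Game of Life rule (B3/S23): a dead cell with exactly 3
    // live neighbors is born; a live cell with 2 or 3 survives.
    function lifeRule(alive, liveNeighbors) {
      return liveNeighbors === 3 || (alive && liveNeighbors === 2)
        ? 1
        : 0;
    }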

Stephen Wolfram wrote a tome on the subject in 2002, a book boldly entitled “A New Kind of Science” (see References). In its 1,197 pages, Wolfram analyzes numerous rules and their resulting output over many iterations. He proposes, and then implements, a new science that focuses on the emergent behavior of a system of CA leveraging simple rules: a new kind of science that studies the complex that remarkably emerges from the simple.
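
The rules Wolfram catalogs are one-dimensional “elementary” automata, each identified by a number from 0 to 255 whose bits encode the rule itself. A sketch of one generation, assuming a wrap-around row of cells:

    // One generation of a 1D elementary cellular automaton. The rule
    // number's bits give the next state for each of the 8 possible
    // (left, center, right) neighborhoods, e.g. rule 30 or rule 110.
    function elementaryStep(cells, ruleNumber) {
      return cells.map((_, i) => {
        const left = cells[(i - 1 + cells.length) % cells.length];
        const center = cells[i];
        const right = cells[(i + 1) % cells.length];
        const index = (left << 2) | (center << 1) | right;
        return (ruleNumber >> index) & 1;
      });
    }

    console.log(elementaryStep([0, 0, 1, 0, 0], 30)); // [0, 1, 1, 1, 0]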

The Next (or Real) AI — Artificial Life

Artificial Intelligence must progress beyond simply attempting to act human. With enough vector matching, which is really all neural networks are, we will be able to recognize every face on the planet. This is not intelligence; no single human can recognize every face on the planet. It is simply massive regression algorithms at work.

In Erwin Schrödinger’s famous book “What is Life?”, he discusses why things are “big” yet built of really tiny other things like atoms. More importantly, why does randomness at the tiny level (e.g. Brownian motion) beget “big” things that appear to be relatively stable? (Spoiler alert: that’s why things are big, to create macroscopic stability from microscopic instability.)

Our brains are a collection of 100 billion neurons, each acting more like a CA algorithm than a machine learning algorithm. Somehow, the brain manages to harness this mass of tiny elements into a coherent whole: a set of small things that begets a “big” intelligence.

One could also argue that machine learning algorithms are similar to CA in that they are defined at the level of simulated “neurons” arranged in layers, as in backward error propagation (or other gradient descent algorithms). However, there is an overarching controlling algorithm that is generally not parallel, and it is focused on simple vector matching, not on harnessing unexpected emergent behavior.
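
The contrast is easy to see in code. A toy gradient descent loop like the one below has a single overarching controller repeatedly nudging a weight toward known targets (the data and learning rate are purely illustrative); a CA has no loop “above” the cells at all:

    // A toy gradient descent loop: one global controller adjusts a
    // weight to minimize squared error against known targets,
    // i.e. "vector matching" in this article's terms.
    let w = 0;
    const data = [[1, 2], [2, 4], [3, 6]]; // inputs x, targets y = 2x
    const rate = 0.05;
    for (let epoch = 0; epoch < 200; epoch++) {
      for (const [x, y] of data) {
        const error = w * x - y; // prediction minus target
        w -= rate * error * x;   // gradient of squared error wrt w
      }
    }
    console.log(w); // converges toward 2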

Artificial life would be accomplished by reproducing something that, from above, looks organic. In this way, Conway’s “Game of Life” is vastly more life-like than a neural network. CAs are also massively parallel algorithms and are vastly simpler from a computational perspective than machine learning. Artificial life would put Artificial Intelligence in its proper place, namely translating human senses into computer senses more accurately during the transition period to 100% driverless cars.

The Code Sample

A simple software simulation accompanies this article (available for download from GitHub) to put a little concreteness around the concepts presented herein. A screenshot of the program’s output is shown at the beginning of this article.

The goal of the simulation is simple:

  1. Simulate a system of cars moving in all directions through a grid of streets and intersections.
  2. Give each car a simple rule that governs its movement in relationship to other cars.
  3. Cars can talk to one another, but there’s no overarching algorithm controlling their movement.
  4. Make sure no cars crash into one another.
  5. Make sure nothing gets “stuck”.

In short, this simulation is the result of the original question posed at the start of the article: “If all cars were driverless, would we need traffic lights?” The simulation demonstrates that the answer is clearly no.

When the project is run and the “Start” button is clicked, cars begin appearing at the edges of the grid. As time goes by, more cars appear, and all must negotiate with one another to avoid crashing, especially at intersections. After watching for a few seconds, one sees the organic nature of the cars’ behavior, similar to when one watches the “Game of Life”.

Even more interestingly, when various rules for these cars were attempted, things got stuck, a lot. It was very easy to create a situation where four cars would come to an intersection at the same time and simply get stuck because their rules created a stalemate. In fact, “Car1” in the source code is just such a simple car. Its rule was “if the next cell in my direction is clear, take it”. As simple as this rule is, it does work relatively well with few cars on the road.
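
In spirit, Car1’s rule reduces to a few lines. This is a paraphrase of the idea, not the repository code verbatim, and the helper names are hypothetical:

    // The essence of Car1's rule: move into the next cell along my
    // direction only if it is empty; otherwise wait. Four such cars
    // meeting at an intersection can stalemate forever.
    function car1Step(car, isEmpty) {
      const nx = car.x + car.dx, ny = car.y + car.dy;
      if (isEmpty(nx, ny)) {
        car.x = nx;
        car.y = ny;
      }
    }

    // Tiny usage example with a stand-in emptiness check:
    const car = { x: 0, y: 0, dx: 1, dy: 0 };
    car1Step(car, () => true);
    console.log(car); // { x: 1, y: 0, dx: 1, dy: 0 }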

“Car4”, however, works as desired (Car2 and Car3 were failed attempts not worth including). Cars do not get stuck, and the model runs forever, showing off its organic, emergent behavior. Its rule is slightly more complex, but the key was creating the concept of courteousness, where every so often (randomly) a car that could go through an intersection decides to wait and generously let others through.
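
And here is the essence of Car4’s courteousness, again paraphrased with hypothetical names and an illustrative probability rather than lifted from the repository:

    // Car4's twist: a car that could enter an intersection
    // occasionally yields at random, breaking the stalemates that
    // trap Car1.
    const COURTESY = 0.2; // probability of generously yielding

    function car4Step(car, isEmpty, isIntersection) {
      const nx = car.x + car.dx, ny = car.y + car.dy;
      if (!isEmpty(nx, ny)) return;            // blocked: wait
      if (isIntersection(nx, ny) && Math.random() < COURTESY) {
        return;                                // yield this turn
      }
      car.x = nx;
      car.y = ny;
    }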

The “courteousness” factor is an algorithmic life lesson!

Summary

When viewing the simulation provided with this article, it is interesting to consider that simple rules applied to independent cars working together as a group create artificial life rivaling what can be done today with the most sophisticated algorithms and billions of dollars of research into driverless cars (which still don’t work). This bears repeating: a simple model that took no more than a few days to put together already rivals the most complex self-driving car AI, when viewed from above.

It may be hard to imagine how we would “lay the tracks” of the future for all driverless cars, and how every entity that must be “seen” by a driverless car would get a digital representation compatible with a computer’s senses. But with IoT, smartphones, watches, wearables, etc., we’re already on exactly such a path. At the very least, self-driving cars in the interim will need a redundancy mechanism (currently a human driver who takes control when the neural net goes sideways) for the foreseeable future. Maybe we should be laying some track now to provide just such redundancy for the future of 100% driverless cars, or trains, as they used to be called.

Code Details:

The code is a React.js project, for no other reason than that it makes it simple to do user interface (UI) updates in a Web browser, so effort can be spent on the algorithms rather than on writing UI code. To install and use it, install Node.js, git clone the auto-car repository, and then run ‘npm install’ followed by ‘npm start’ from a command prompt in the auto-car project directory.

  • /src: Contains boilerplate React.js scaffolding.
  • /src/components: Contains the user interface control (JSX) which renders the grid and cars.
  • /src/modules/AutoSimModel.js: Houses the actual grid. The next() method runs a generation (a rough sketch of this pattern follows this list).
  • /src/modules/cars: This directory contains the car objects (and their base class). Note that Car4 is the one that works best, as it has some altruistic characteristics. Car1 creates cars that get stuck quite quickly when cars are added rapidly, but it still works OK if cars are added slowly.
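
For orientation, a generation step like next() can be imagined along these lines. This is a sketch of the shape of the pattern, not the actual AutoSimModel.js code:

    // Hypothetical shape of a generation step: ask every car to apply
    // its own local rule against the current model. There is
    // deliberately no global coordination logic here.
    class AutoSimModel {
      constructor() {
        this.cars = [];
      }
      next() {
        for (const car of this.cars) {
          car.applyRule(this); // each car decides for itself
        }
      }
    }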

References/Bibliography:

  • Conway, J. H. (1970). Game of Life. Retrieved from https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
  • Schrödinger, E., & Penrose, R. (1992). What is Life?: With Mind and Matter and Autobiographical Sketches (Canto). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139644129
  • Wolfram, S. (2002). A New Kind of Science. Champaign, IL: Wolfram Media.
  • Kong, X. (2007). Research on Modelling and Characteristics Analysis of Traffic Flow Based on Cellular Automaton. Beijing Jiaotong University.

Updates:

I’m adding this section to capture relevant news articles or other updates related to and/or supporting the topics discussed herein to keep this article more of a “living” document.

  • April 22, 2019: I was just sent an article describing something called “V2X” (see https://en.wikipedia.org/wiki/Vehicle-to-everything). I really wish I had come across this when first writing the article. This technology matches up exactly with my thoughts on how computers see like computers, not like humans, and how all things must be wired (even pedestrians and cyclists). It’s always nice to know other minds are not so crazy, or perhaps are just as crazy!
  • January 5, 2019: A Wall Street Journal article titled “The Grocery Robot Is Here” appeared in today’s issue and discusses autonomous vehicles being tested to deliver groceries. Two key quotes: “unless cities create special lanes for self-driving wagons, they might become a hassle for pedestrians” (special lanes? sounds a lot like a “track”, no?) and “almost all of these robots need minders of one sort or another”.


Rob has been programming for nearly 40 years, starting as a child programming BASIC on a TI-99/4A. He has held C-level and technical positions for decades.