PODCAST

Self-Driving Cars: Past, Present and Future

Peter Gao on the challenges and innovation at the heart of autonomous driving

Jeremie Harris
Towards Data Science
4 min read · Jul 7, 2021


APPLE | GOOGLE | SPOTIFY | OTHERS

Editor’s note: This episode is part of our podcast series on emerging problems in data science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a data science mentorship startup called SharpestMinds.

Cruise is a self-driving car startup founded in 2013 — at a time when most people thought of self-driving cars as the stuff of science fiction. And yet, just three years later, the company was acquired by GM for over a billion dollars, having shown itself to be a genuine player in the race to make autonomous driving a reality. Along the way, the company has had to navigate and adapt to a rapidly changing technological landscape, mixing and matching old ideas from robotics and software engineering with cutting-edge techniques like deep learning.

My guest for this episode of the podcast was one of Cruise’s earliest employees. Peter Gao is a machine learning specialist with deep experience in the self-driving car industry, and the co-founder of Aquarium Learning, a Y Combinator-backed startup that specializes in improving the performance of machine learning models by fixing problems with the data they’re trained on. We discussed Peter’s experiences in the self-driving car industry, the innovations that have spun out of self-driving car tech, and some of the technical and ethical challenges that need to be overcome for self-driving cars to reach mainstream use around the world.

Here were some of my favourite take-homes:

  • The history of self-driving cars goes back much further than most people think. As early as the mid-1900s, the first proposals for driverless transportation were being developed, but given the state of technology at the time, the only realistic way to achieve them was to heavily constrain the problem. Specially built tracks, magnets installed under roads to guide vehicles, and other custom infrastructure were required, even in theory. But as time went on and technology improved, those constraints could be relaxed: by the 1990s, rudimentary computer vision algorithms allowed self-driving cars to perform reasonably well on the highway. These more modern techniques still required that the automated vehicle be filled with server racks, though, and they weren’t adaptable enough for city driving. It’s only with the advent of deep learning that the perception and planning capabilities required for everyday driving have gotten good enough for mainstream use: thanks to computer vision, cars can now interpret the infrastructure around them, rather than needing it to be built with them in mind. Still, even modern self-driving cars are a Frankenstein’s monster of deep learning, classical algorithms for 3-D geometric reconstruction, hard-coded rule structures, and robotics.
  • The main bottleneck to building fully autonomous self-driving cars has turned out to be the out-of-distribution sampling problem, an issue that arises when a car runs into a scenario it didn’t encounter during training. For example, Peter cites the challenge of identifying pedestrians wearing elaborate costumes on Halloween — if a model hasn’t encountered a person in a morph suit during training, it’s less likely to correctly classify them as something to be avoided (the sketch after this list shows one simple way such unfamiliar inputs can be flagged). This sampling problem also means that self-driving car tech developed in San Francisco can be dangerous to deploy in cities with different street sizes and road conditions, like Phoenix or Montreal, which is why the rollout is likely to proceed on a city-by-city basis. Each new environment is a fundamentally new problem.
  • Full autonomy isn’t actually necessary for a lot of commercially valuable applications. Some problem settings are naturally more constrained than others — and Peter cites dishwashers as an example: they’re technically an autonomous application of robotics, but that’s only made possible by the fact that they’re set up in a carefully constrained environment. Less constrained than dishwashers, but more constrained than the problem of everyday driving, are problems like moving cargo along a dockyard, or using self-directed drones to inspect power lines. While these applications may not always look like autonomous cars, they’re offshoots of the same tech that makes self-driving cars possible.
  • As we hand over more of our decision-making to machines, we start to face some difficult moral questions — and nowhere are these questions thornier than in self-driving car tech. Who, or what, is to blame when a crash happens? The answer will of course be situation-dependent. Since self-driving cars normally have some human oversight — in the form of operators responsible for multiple vehicles, who can intervene in cases of ambiguity — there will be cases in which operator negligence is a factor. Still, there are scenarios in which an AI-powered decision leads to bad outcomes. When that happens, does responsibility rest with the company that built the AI, the company that deployed it, the individuals who developed the algorithm, or those who curated the data the car was trained on?
  • Peter highlights that quite often, problems with AI performance actually come from problems with training data, rather than algorithm architecture. He’s encountered a relatively consistent set of problems, and worked on solving them using surprisingly generalizable techniques.
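Peter doesn’t get into implementation details on the show, but to make the out-of-distribution idea above a bit more concrete, here’s a minimal sketch of one common heuristic: flag any detection whose top-class confidence is low, so it can be routed for human review or targeted data collection. The class list, threshold, and scores below are hypothetical, and this isn’t meant to represent how Cruise or Aquarium actually do it.

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_out_of_distribution(logits, threshold=0.7):
    """Flag detections whose top-class confidence falls below a threshold.

    Low peak confidence is a rough proxy for "the model hasn't seen
    anything like this before" -- e.g. a pedestrian in a morph suit.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    top_confidence = probs.max(axis=-1)
    return top_confidence < threshold, top_confidence

# Hypothetical detections scored over [pedestrian, cyclist, vehicle].
logits = [
    [8.0, 1.0, 0.5],   # confidently a pedestrian -- in distribution
    [2.1, 1.9, 2.0],   # near-uniform scores -- likely an unfamiliar object
    [0.5, 0.7, 7.5],   # confidently a vehicle -- in distribution
]
flags, confidences = flag_out_of_distribution(logits)
for conf, flagged in zip(confidences, flags):
    print(f"confidence={conf:.2f} -> {'send for review' if flagged else 'ok'}")
```

In practice, teams often go further, for example by comparing embeddings of incoming data against the training set, but the workflow is the same: detect the unfamiliar cases, get them labeled, and feed them back into training.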

You can follow Peter on Twitter here, or follow me on Twitter here.

Chapters:

  • 0:00 Intro
  • 1:45 Peter’s background
  • 4:15 Early projects
  • 8:00 How perception works for self-driving cars
  • 18:30 Main constraints
  • 22:50 Timeline of self-driving car tech
  • 26:40 Exciting applications of self-driving car tech
  • 34:50 Automating other fields
  • 42:35 Reasoning through accidents and mistakes
  • 47:10 Most common challenges among datasets
  • 56:00 Different kinds of errors and how to handle them
  • 1:00:10 Wrap-up

