PODCAST

Scary Smart: A former Google exec’s perspective on AI risk

Mo Gawdat on AGI, its potential and its safety risks

Jeremie Harris
Towards Data Science
4 min read · Jan 26, 2022


APPLE | GOOGLE | SPOTIFY | OTHERS

Editor’s note: The TDS Podcast is hosted by Jeremie Harris, who is the co-founder of Mercurius, an AI safety startup. Every week, Jeremie chats with researchers and business leaders at the forefront of the field to unpack the most pressing questions around data science, machine learning, and AI.

If you were scrolling through your newsfeed in late September 2021, you may have caught this splashy headline from The Times of London that read, “Can this man save the world from artificial intelligence?”

The man in question was Mo Gawdat, an entrepreneur and senior tech executive who spent several years as the Chief Business Officer at GoogleX (now called X Development), Google’s semi-secret research facility that experiments with moonshot projects like self-driving cars, flying vehicles, and geothermal energy. At X, Mo was exposed to the absolute cutting edge of many fields, one of which was AI. His experience seeing AI systems learn and interact with the world raised red flags for him: hints of the potentially disastrous failure modes of the AI systems we might just end up with if we don’t get our act together now.

In his new book, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, Mo writes about his experience as an insider at one of the world’s most secretive research labs, about how it led him to worry about AI risk, and about AI’s promise and potential. He joined me to talk about just that on this episode of the TDS podcast.

Here are some of my favourite take-homes from the conversation:

  • Over the last several decades, progress in AI has been exponential (or more than exponential, if you measure it by compute curves). Humans are really bad at extrapolating exponential trends, and that leaves us prone to being taken by surprise, partly because exponential progress can change the world so much and so fast that predictions become next to impossible. Powered by exponential dynamics, a single COVID case turns into a nation-wide lockdown within weeks, and a once-cute, ignorable tool like AI becomes a revolutionary technology whose development could shape the very future of the universe (see the short sketch after this list).
  • One of the core drivers behind the exponential progress of AI has been an economic feedback loop: companies have learned that they can reliably invest money in AI research, and get a positive return on their investment. Many choose to plough those returns back into AI, which amplifies AI capabilities further, leading to a virtuous cycle. Recent scaling trends seem to suggest that AI has reached a kind of economic escape velocity, where returns on a marginal dollar invested in AI research are significant enough that tech executives can’t ignore them anymore — all of which makes AGI inevitable, in Mo’s opinion.
  • Whether AGI is developed by 2029, as Ray Kurzweil has predicted, or somewhat later, as this great post by Open Philanthropy argues, doesn’t really matter. One way or another, artificial human-level or general intelligence (definitions are fuzzy!) seems poised to emerge by the end of the century. Mo thinks it’s a huge mistake that AI safety and AI policy aren’t our single greatest priorities as a species. And on that much, I certainly agree with him.
  • Mo doesn’t believe that the AI control problem (sometimes known as the alignment problem) can be solved. He considers it impossible that organisms orders of magnitude less intelligent than AI systems would be able to exert any meaningful control over them.
  • His solution is unusual: humans, he argues, need to change their online behaviour and approach one another with more tolerance and civility on social media. The hope behind this strategy is that, as AI systems are trained on human-generated social media content, they will learn to mimic more virtuous behaviours and pose less of a threat to us. I’m admittedly skeptical of this view, because I don’t see how it addresses some of the core features of AI systems that make alignment so hard (for example, power-seeking and instrumental convergence, or the challenge of objective specification). That said, I think there’s a lot of room for a broader conversation about AI safety, and I’m glad Mo is shining a light on this important problem.
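To make the first point a bit more concrete, here’s a minimal Python sketch (mine, not Mo’s) of why linear intuition underestimates exponential trends: a linear and an exponential projection can agree closely over the first week and still end up orders of magnitude apart a couple of months later. The doubling time and horizon below are purely illustrative assumptions, not measured values.

```python
# A minimal sketch of linear vs. exponential extrapolation.
# The doubling time and horizon are illustrative assumptions only.

def linear_projection(start, daily_increase, days):
    """Project growth by adding a fixed amount each day."""
    return start + daily_increase * days

def exponential_projection(start, doubling_time_days, days):
    """Project growth by doubling every `doubling_time_days` days."""
    return start * 2 ** (days / doubling_time_days)

start = 1            # e.g. one initial case, or one unit of "capability"
horizon_days = 60    # two months out

# Calibrate both projections so they agree over the first week...
week_value = exponential_projection(start, doubling_time_days=3, days=7)
daily_increase = (week_value - start) / 7

# ...then compare them at the horizon.
linear = linear_projection(start, daily_increase, horizon_days)
exponential = exponential_projection(start, doubling_time_days=3, days=horizon_days)

print(f"Linear extrapolation after {horizon_days} days:      {linear:,.0f}")
print(f"Exponential extrapolation after {horizon_days} days: {exponential:,.0f}")
# The two agree early on, then diverge by orders of magnitude, which is
# roughly why exponential progress keeps taking us by surprise.
```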

You can follow Mo on Twitter here, or me here.

Chapters:

  • 0:00 Intro
  • 2:00 Mo’s background
  • 7:45 GoogleX projects
  • 14:20 Return on investment
  • 21:40 Not creating another machine
  • 28:00 AI as an embedded agent
  • 41:35 Changing human behaviour
  • 53:35 Goals and power seeking
  • 58:45 Wrap-up


Co-founder of Gladstone AI 🤖, an AI safety company. Author of Quantum Mechanics Made Me Do It (preorder: shorturl.at/jtMN0).