PODCAST

Pointing AI in the right direction

A cross-over episode with the Banana Data podcast!

Jeremie Harris
Towards Data Science
3 min read · Jun 23, 2021


APPLE | GOOGLE | SPOTIFY | OTHERS

Editor’s note: This episode is part of our podcast series on emerging problems in data science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a data science mentorship startup called SharpestMinds.

This special episode of the Towards Data Science podcast is a cross-over with our friends over at the Banana Data podcast. We’ll be zooming out and talking about some of the most important current challenges AI creates for humanity, and some of the likely future directions the technology might take.

Here were some of my favourite take-homes:

  • Humans are offloading more and more of their thinking and decision-making to machines, and we’re doing so faster and faster as the capabilities of AI systems increase. But with decision-making comes accountability: if an AI system is charged with making critical decisions and those decisions go wrong, who is to blame? As a society, we still haven’t thought through the question of accountability in AI-assisted (or AI-led) decision-making.
  • Accountability in the context of AI-powered decision-making is also a moving target. We can’t come up with a set of static rules about who’s responsible when AI-enabled decisions go bad, because the capabilities of AI systems are constantly increasing, and AIs are being applied to new problem classes all the time. It might make sense to think of accountability not as a set of rules, but as a dynamic process and set of principles that can be more robust to changes in the scope of AI decision-making.
  • Drones are a great current example of the AI accountability problem: the world’s first fully autonomous drone attack recently took place in Libya, and while details are still sparse, it seems the drone identified its targets and made the decision to engage them without human intervention, breaking with the human-in-the-loop paradigm that many governments have backed. If a drone attacks the wrong target, who takes the blame? The drone manufacturer? The algorithm designer? The country that deployed the drone? Or even the drone itself? These questions aren’t easy to think through.
  • There’s a principle called Goodhart’s Law that lies at the core of the challenge of safely developing AI systems. Goodhart’s Law says that as soon as you define a metric to measure the performance of a system, people (and potentially AIs!) will start to game that metric, and it will cease to be a reliable indicator of the performance you were actually trying to optimize (the toy simulation after this list shows this dynamic in action).
  • For example: back in the 1920s, the US stock market was a pretty reliable indicator of overall US economic health. But as more attention has been focused on that metric, governments and central banks have found ways to rig it — for example, by printing money that ultimately pumps up stock prices in a way that decouples them from the broader economy. Companies run into Goodhart’s Law all the time: if they define a metric they want to optimize and they’re not careful about keeping the big picture purpose of that metric in mind, they can end up focusing myopically on that metric in ways that are counterproductive.
  • We’ve talked about AI alignment a fair bit on the podcast before: it’s the surprisingly challenging problem of aligning AIs with human values. Most people think of the alignment problem as something that will become important only once we build machines that are more generally intelligent than we are, but in reality, we’re arguably running into it already. For example, Twitter’s recommender system is nominally designed to optimize for user engagement, but some have wondered whether it’s learned to hack this metric by feeding users content that’s more politically polarizing.
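To make Goodhart’s Law concrete, here’s a minimal Python sketch of what happens when you hill-climb on a proxy metric. Everything in it is hypothetical (the functions, the numbers); the point is just that selecting purely on a measurable stand-in for quality eventually decouples it from the thing you actually care about — the same dynamic at play in the engagement-optimizing recommender example above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the "true" value of a system depends on two hidden
# qualities, but the proxy metric we can measure only captures the first,
# and over-optimizing that quality eventually backfires.
def true_value(x):
    return x[0] + x[1] - 0.1 * x[0] ** 2  # saturates, then declines

def proxy_metric(x):
    return x[0]  # the measurable stand-in we actually optimize

# Naive hill-climbing that selects purely on the proxy.
x = np.array([0.0, 1.0])
for step in range(1, 201):
    candidate = x + rng.normal(0.0, 0.5, size=2)
    if proxy_metric(candidate) > proxy_metric(x):  # proxy is all we check
        x = candidate
    if step % 50 == 0:
        print(f"step {step:3d}: proxy={proxy_metric(x):7.2f}  "
              f"true value={true_value(x):7.2f}")
```

Early in the run, improving the proxy also improves the true objective; past a point the two come apart, and further “optimization” makes things actively worse.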

You can check out the Banana Data podcast here, or follow me on Twitter here.

Chapters:

  • 0:00 Intro
  • 3:40 Accountability in the age of AI
  • 9:00 Goodhart’s Law
  • 12:35 AI safety challenges
  • 18:05 The difference between safety and transparency
  • 24:45 Concept of global alignment
  • 29:00 Automation within different industries
  • 33:10 AI safety and crisis management
  • 34:20 Wrap-up
