Data-Driven Leadership and Careers

Forget the robots! Here’s how AI will get you

The real reason AI is more dangerous than traditional software

Cassie Kozyrkov
Towards Data Science
10 min read · Sep 13, 2019



AI ethics is a hot topic these days, so you see all kinds of rhetoric zooming around. Complaints range from “the robots took my job” to “your computer system is just as biased as you are (you jerk).”

Why aren’t we talking about what makes ML/AI uniquely more dangerous than other technologies?

The topics that come up in connection with AI ethics are vital, timely, and necessary. Let’s keep discussing them! I just wish we wouldn’t use the term AI ethics whenever it… isn’t even about AI.

When AI ethics discussions miss the point

Not to pick on the World Economic Forum (much love, guys), but I find this WEF article on the Top 9 ethical issues in artificial intelligence convenient fodder for a quick exercise.

Many AI ethics talking points aren’t specific to AI. They’re about technology in general and they’re nothing new.

Take their section headings, replace {“AI”, “robots”, “machines”, “intelligent system”, “artificial”} with “technology” and see if we break anything.

You can find the full article here.

How many of their “Top 9 issues” are specific to AI?

  1. What happens after the end of jobs?
  2. How do we distribute the wealth created by technology?
  3. How does technology affect our behavior and interaction?
  4. How can we guard against mistakes?
  5. How do we eliminate technology bias?
  6. How do we keep technology safe from adversaries?
  7. How do we protect against unintended consequences?
  8. How do we stay in control of complex technology?
  9. How do we define the humane treatment of technology?

Ethics issues 1–8 are relevant to technology in general and certainly to traditional software at scale. Using AI to get the public interested in them reminds me of geologists using pet rocks as teaching aids. It’s all in good fun until the geology lesson turns into pet rock psychology (issue 9).

Paint a face on something and before long you’re having conversations with it. That’s a quirk in how our species is wired. Just because something about it reminds you of you doesn’t mean it’s got a brain.


If you want to perpetuate unfair treatment of people, launch ineffective solutions, disrupt labor markets, change how people interact with one another, release things with unintended consequences that fall into the wrong hands, and create a complex system you can’t get rid of, you can do it all without ML/AI. (Please don’t.) You can also have a productive discussion about what all these things mean for our world without ever invoking big data or neural networks.


So, is there an issue that’s specific to ML/AI? Of course! Is it The Singularity? Let’s not get ahead of ourselves. The singularity can wait (for its Nebula Award). There’s a much more urgent candidate, and it boils down to what today’s ML/AI actually is.

This is just another pet rock. The mimicry is slightly better, sure, but it’s still a lifeless object with a face painted on it.

AI versus the robots

What’s great about AI? It lets you automate the ineffable! You can use patterns in data instead of having to meditate your way to a solution. Do you realize how powerful this is? It means that even if you can’t come up with instructions, you might be able to automate your task anyway. What more could you want? Personhood? Replacement humans? Singularities? Stop. AI isn’t about that. Marketing it as chrome-plated humanoids takes advantage of the public’s ignorance… and distracts you from the real danger.

Robots are just another kind of pet rock. Go on, put googly eye stickers on your vacuum cleaner, I know you want to.

If we use our mental energy worrying about the wrong things, we’ll miss the parts that we really ought to worry about. You shouldn’t let poets lie to you.

Neural networks aren’t brains.

The way the term AI is used today isn’t about developing replacement human-like entities with personhood. (The better term for that is HLI: human-like intelligence.) It’s a set of tools for writing software a different way, letting you program with examples (data) instead of explicit instructions.

“AI is a set of tools for programming with examples (data) instead of explicit instructions.”

Searching for the promise of AI? How about the peril of AI? They’re both right there in that last quote. Look closer…

Levels of distraction

Imagine that you want to automate a task that takes 10,000 steps. In traditional programming, a person must sweat over each of those little instructions.

In traditional programming, every part of the solution is handcrafted by a human.

Think of it as 10K LEGO pieces that need arranging by human hands. Since developers are blessedly impatient, they’ll package up some parts so they don’t need to repeat themselves. Instead of working with 10,000 loose bits, you can download some of the packages other people already put together and then you’re working at a higher level of abstraction — you only need to put together 50 pre-built LEGO constructions of 200 little blocks each. If you trust the work of the people who packaged up those LEGO arrangements, then you don’t need to think about the individual block details. You can connect the roof piece to the house piece instead of thinking on the level of tiles and bricks. And who even has time for that anyway? (Maybe, when you’re done, you’ll package up your 10K-piece masterpiece so someone making a 100K epic can insta-copy it and save time too. All hail GitHub!)
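To make the abstraction point concrete, here’s a tiny sketch (mine, not the author’s) of what downloading a package other people already put together buys you:

```python
# One import stands in for thousands of lines that other people
# already agonized over (sockets, TLS, redirects, encodings...).
import requests  # a widely used third-party HTTP package

response = requests.get("https://example.com")  # fetch a page in one line
print(response.status_code)                     # 200 if all went well
```

One line of your own thought, backed by years of someone else’s.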

But here’s the thing: even if you didn’t have to do all of it yourself (thank goodness), every instruction among those 10,000 steps was agonized over by a human brain… and that’s the part that goes away with ML/AI.

Machine learning takes you from a high level of abstraction to a new level of distraction.

There’s a lot of huffing and puffing in ML/AI engineering, but most of it is about spinning up and wrangling unfriendly tools. You might write 10K lines of code in your project, but most of it is in service of coaxing those unwieldy tools into accepting your instructions. As the tools get better and better, you’ll eventually see that there are only two real instructions in ML/AI:

  1. Optimize this goal
  2. on this dataset.

That’s all. Now you can use two lines of human thought to automate your task instead of 10,000. This is beautiful — and scary!
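If you’ve never seen those two instructions in the wild, here’s a minimal sketch in Python. The library (scikit-learn), model, and dataset are my illustrative picks, not anything the argument depends on:

```python
# The two "real" instructions of ML/AI, in miniature.
# Model and dataset choices here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)  # ...on this dataset.
model = LogisticRegression(max_iter=5000)   # the tool decides the "how"
model.fit(X, y)                             # Optimize this goal... (here: log loss)
```

Everything else in a real project is plumbing around those two decisions: which goal, which data.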

Whose job does AI really automate?

Some tasks aren’t very important and it’s fabulous that we can get them out of the way without much thought. You can get things done faster! You can get things done even if you don’t know how to do them! That’s the source of the ML/AI feeding frenzy among those who aren’t blinded by sci-fi — and the feeding frenzy is real.

ML/AI lets humans skip handcrafting those 10,000 explicit solution steps. Instead, the system comes up with those 10,000 lines (or something like them) automatically, by forming a solution out of patterns in the examples a developer gives it.

The fundamental difference is the amount of thoughtfulness built in.

Prepare to have your mind blown if you’ve never pondered whose job ML/AI actually automates:

A developer automates/accelerates other people’s work.

ML/AI automates/accelerates a developer’s work.

Instead of coding up “do this, then this, then this, then …” you can say, “try to get a good score on these data.” In other words, “here’s what I like, let me know when one of your monkeys on a typewriter gets there.”

(Don’t worry, there’s plenty of wrangling to do to get datasets ready for algorithms to deign to run on them, so software engineers aren’t about to go out of style. The way they work is poised to change, though, as they shift from telling the computer what to do via instructions to telling it what to do via data.)

Thoughtlessness enabled

It’s time for the punchline! Here’s the most immediate ML/AI-specific problem: thoughtlessness enabled.

When the wellbeing of our fellow humans is at stake, thoughtlessness is a hazard. ML/AI is a thoughtlessness enabler.

When it matters, will whoever’s in charge of the project really put 5,000 instructions’ worth of thought into each of those 2 ML/AI lines? Really, really?

What else did you forget to think through?

Which examples?

ML/AI is about expressing yourself with examples, so you have the unfortunate option of pointing your system at a dataset without ever verifying that what’s inside consists of relevant, unbiased, high-quality examples. And now for the Hemingway lecture on where AI bias comes from…

AI bias: inappropriate examples, never examined.
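What does examining your examples even look like? Here’s a hypothetical first pass; the file name and columns are invented for illustration:

```python
# A hypothetical sanity check before pointing a model at a dataset.
# The file and column names ("approved", "applicant_region") are made up.
import pandas as pd

df = pd.read_csv("loan_applications.csv")

print(df["approved"].value_counts(normalize=True))        # lopsided labels?
print(df.groupby("applicant_region")["approved"].mean())  # outcomes differ by group?
print(df.isna().mean().sort_values(ascending=False))      # where the data is missing
```

A few minutes of looking won’t certify a dataset as unbiased, but skipping even this step is exactly the thoughtlessness in question.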

Which goals?

You also have free rein to flippantly pick a goal that sounds good in your head but turns out to be a terrible idea. “Catch as much spam as possible” is something a leader might say to a human developer in expectation of a solid and sensible filter. Say it the same way to an AI algorithm and you’ll soon start wondering why no new email is coming in. (Answer: flagging everything as spam gets a perfect score on your stated objective.)

Any fool can belch out a flippant goal. Unfortunately, a learning system will hold them to it.
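Here’s that failure in miniature. Suppose “catch as much spam as possible” gets translated literally into maximizing recall (my choice of metric for illustration; the labels are toy data):

```python
# A flippant goal, taken literally: "catch as much spam as possible"
# is maximized by flagging EVERY message as spam.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1]  # 1 = spam, 0 = legitimate (invented labels)
y_pred = [1] * len(y_true)         # degenerate "solution": flag everything

print(recall_score(y_true, y_pred))     # 1.0   -- perfect on the stated goal
print(precision_score(y_true, y_pred))  # 0.375 -- most flagged mail was fine
```

The system didn’t misbehave; it did exactly what it was told. The goal was the bug.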

All of this is aggravated by a strange mysticism that clings to the words “brain” and “mathematics”—I suspect it lulls people into thinking even less about what they’re doing when they choose their goals and examples. Alas, there are no brains here but your own and the math is a tiny layer of objectivity in the middle of your subjectivity sandwich.

Math is a thin layer of objectivity in the middle of your subjectivity sandwich.

Oh dear. As the tools get better, barriers to entry will be so low you’ll trip over them on the way to the bathroom… which is great for small personal projects. But when it comes to projects with the power to impact others, ML/AI demands that those in charge put in more effort, not less. It demands skilled leadership. Are we up to the challenge?

“Give me a place to stand and with a lever I will move the whole world.” -Archimedes

Technology, the great lever

Technology improves our world, expands our horizons, gives us longer lives, and allows our species to feed itself despite our unrestrained urge to multiply into the billions. It can also surprise, destabilize, and redistribute. The more it scales, the more disruptive it can be. It’s a lever that expands human potential, but whenever you enlarge yourself with technology, watch out! It’s easier to step on the people around you.

It’s always more appropriate to think of your tools — including AI — as extensions of you rather than as autonomous agents. When they enlarge you, be sure you’ve got the skills to avoid stepping on those around you.

When we enlarge ourselves with technology, it’s easier to step on the people around us.

Even though many of the issues connected with AI ethics aren’t AI-specific, AI could cause extra inflammation in some sore spots. That’s why it makes sense that those discussions are seeing renewed vigor.

A recipe for negligence amplified

If you ask me whether I’m scared of AI, what I hear you asking me is whether I am scared of human negligence. That’s the only way the question makes sense to me, since I don’t believe in robot fairytales or talking to pet rocks.

Move over, ethics of creating artificial life. Hello, ethics of thoughtlessness at scale.

Take that list of 9 topics we started with and pour in more, more, MORE scale and speed. When you add a thoughtlessness enabler to that equation, you get a recipe for rapidly amplified negligence.

The scary part of AI is not the robots. It’s the people.

With greater power comes greater responsibility, but are people rushing to build the new muscles needed for responsible leadership in a society fueled by data at scale? No wonder we worry about being stepped on.

Am I afraid of AI?

No.

I’m optimistic about humanity’s AI future, but I’m also doing as much as I can not to leave it to chance. I’m convinced that the skills for responsible leadership in the AI era can be taught, and that people can build safe and effective systems wisely, driving progress and making life better for those around them. That’s why I (and others like me) choose to step up and share what we’ve learned the hard way, through experience or by ferreting around in previously siloed academic disciplines.

AI is how we reach past the low-hanging fruit to solve humanity’s most challenging problems.

As we put together our collection of voices to train a new breed of leader skilled in decision intelligence, we hope to help new generations build AI more thoughtfully and unlock the best side of technology. The same side that takes us to the stars, frees us from disease, dissolves ignorance, conserves resources, and connects us to loved ones halfway across the world.

Technology can be wonderful if we let it… and I believe we will.

Thanks for reading! How about an AI course?

If you had fun here and you’re looking for an applied AI course designed to be fun for beginners and experts alike, here’s one I made for your amusement:

Enjoy the entire course playlist here: bit.ly/machinefriend

Liked the author? Connect with Cassie Kozyrkov

Let’s be friends! You can find me on Twitter, YouTube, Substack, and LinkedIn. Interested in having me speak at your event? Use this form to get in touch.
