
Are You Afraid? 3 Reasons Why AI Scares Us

AI could be dangerous if we don't do it right.

ARTIFICIAL INTELLIGENCE | PHILOSOPHY

Photo by Andrew Boersma on Unsplash

A general artificial intelligence may be far in the future, but we have reasons to be extremely careful.

For some years now, important public figures have raised concerns about the potential dangers of AI. The discourse revolves around the idea of a superintelligent AI freeing itself from our control. Some skeptics argue that the scenario of AI "enslaving" us is so distantly dystopian that it isn’t worth considering. For instance, Gary Marcus ridiculed it, saying that "it’s as if people in the fourteenth century were worrying about traffic accidents, when good hygiene might have been a whole lot more helpful."

Although there’s no proof a superintelligence would overrule us, the prospect that we’ll eventually build one isn’t so far-fetched. Thus, assessing how an all-powerful entity could harm humanity is paramount if we want to continue down this path. That’s what Elon Musk and Stephen Hawking – among others – have been saying since the deep learning revolution took off. In 2018, at the South by Southwest tech conference in Austin, Texas, Musk explained how AI could foster an existential crisis:

"We have to figure out some way to ensure that the advent of digital superintelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one," to which he added: "Mark my words, AI is far more dangerous than nukes."

A few years before that, in 2014, renowned physicist Stephen Hawking told the BBC about the dangers of AI outsmarting us:

"The development of full artificial intelligence could spell the end of the human race. […] It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded."

Whether these fears are exaggerated is debatable. There are other, more pressing problems that we should tackle with haste: job losses in virtually every industry, the lack of ethical values in AI systems, and environmental damage, to mention a few. In this article, I’ll describe 3 reasons why some prominent intellectual figures are afraid of AI.


Loss of control – Could AI get rid of its chains?

The ultimate problem we’d face if we build a superintelligence is this: How can we keep it under control, preventing it from finding the means to outwit us and set itself free? Finding a reliable solution is crucial because if we somehow end up building a malfunctioning superintelligence and it releases itself, there’s no way we could trap it afterward.

Neil deGrasse Tyson discussed this scenario at the 2018 Isaac Asimov Memorial Debate, referencing Sam Harris’ podcast. Tyson thought the solution was as simple as keeping the AI in a "box," disconnected from the rest of the world, and "if it gets unruly or out of hand [we] just unplug it." However, as was explained on the podcast, the AI "gets out of the box every time." How? Because it’s smarter than we are.

"[A superintelligence] understands human emotions, it understands what I feel, what I want, what I need. It could pose an argument where I’m convinced I need to take it out of the box. Then it controls the world."

We have a hard time imagining what that argument could be. But, as Tyson explains, we don’t even need to think about what form it’d take. We can understand it through an analogy with chimps. Let’s say we want to capture a chimp in a cage. The chimp knows bad things will happen, so it doesn’t want to go in. Suddenly, we throw a bunch of bananas inside the cage. The chimp wants the bananas, so it goes in, and we capture it. We’re smarter than chimps, the same way a superintelligence would be smarter than us. The chimp couldn’t have imagined that we’d know about bananas, or how much it likes them.

In Tyson’s words:

"Just imagine something that much more intelligent than we are, that sees a broader spectrum of solutions to problems that we’re incapable of imagining."

Assuming that we will eventually be able to build a superintelligence – and that we would choose to do so – the above scenario is plausible. Experts have defined two approaches to avoid becoming a chimp-level species in the face of a superintelligence. The first is capability control: we limit the capabilities of the superintelligence to nip its potential to harm or control us in the bud. That’s the AI-in-a-box argument which, as we’ve seen, is considered unreliable. Still, it can help if we combine it with the second approach: alignment.

How can we align AI’s goals with human values?


Lack of alignment – Can we make sure an AI is always beneficial?

Given that we may not be able to keep a superintelligence under our direct control, the second-best solution is to make sure it shares our goals and values. In that case, it wouldn’t matter whether the AI could act outside of set boundaries in a fully autonomous way; it would always take our preferences into account and always act to our benefit.

In theory, this looks good. We’re free to define our values and preferences in any way we want. Aligning a superintelligence with that would be like having a servant-god that wants – and will want under any circumstance – to be our servant. In his book Human Compatible, UC Berkeley professor Stuart Russell explains that those preferences would be "all-encompassing; they[‘d] cover everything you might care about, arbitrarily far into the future."

However, when it comes down to implementing this solution, we face some tricky issues. How can we define human preferences so an AI can understand them? How can we find universal values that benefit all of humanity equally? How can we make sure the AI’s behavior will ultimately lead to the satisfaction of those shared interests? How can we spell out our desires so that nothing is left unsaid and no variable remains implicit?

All those questions point to alignment issues. To answer them, proponents of this approach aim to harmonize three descriptions of an AI system:

  • Ideal specification: What we want the AI to do.
  • Design specification: The objective function we use.
  • Emergent behavior: What the AI does.

The goal is to align the ideal specification with the emergent behavior. A mismatch between the ideal and design specifications is called outer misalignment: a disconnect between our "true desires" and the actual objective function the AI is optimizing. Russell compares this scenario to "the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want." It’s related to the problem of perverse instantiation.
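To make the distinction between ideal and design specifications concrete, here’s a minimal, purely illustrative Python toy – the room, the vase, and all the reward numbers are my own made-up assumptions, not anything from the alignment literature. A greedy agent optimizes a proxy objective that only counts visible clutter and, in doing so, breaks the vase that the ideal specification cares about.

```python
# Toy illustration of outer misalignment. Everything here (the room, the vase,
# the reward numbers) is a made-up example for illustration only.

def ideal_score(state):
    """The *ideal specification*: little clutter AND an unbroken vase."""
    return -state["clutter"] - (100 if state["vase_broken"] else 0)

def proxy_reward(state):
    """The *design specification* the agent actually optimizes: clutter only."""
    return -state["clutter"]

# Two possible actions and their effects on the room.
ACTIONS = {
    "tidy_carefully": lambda s: {"clutter": s["clutter"] - 1, "vase_broken": s["vase_broken"]},
    "sweep_everything": lambda s: {"clutter": 0, "vase_broken": True},  # fast, but breaks the vase
}

state = {"clutter": 10, "vase_broken": False}

# A greedy agent picks whichever action maximizes the proxy reward.
best_action = max(ACTIONS, key=lambda a: proxy_reward(ACTIONS[a](state)))
state = ACTIONS[best_action](state)

print(best_action)          # 'sweep_everything' -- optimal under the design specification
print(proxy_reward(state))  # 0    -- looks perfect to the objective we wrote down
print(ideal_score(state))   # -100 -- disastrous under the ideal specification
```

The agent does exactly what the design specification asks for, and exactly not what we wanted – the "genie in the lamp" failure Russell describes, shrunk to a few lines of code.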

Inner misalignment, in contrast, refers to deviations between the AI’s behavior in its deployment environment and the goals it pursued during training. Evolution is often used as an analogy for this type of misalignment: we evolved in an ancestral environment, so our inner mechanisms are unfit to help us achieve our goals in the modern world. What made us fit 10,000 years ago can be a hindrance now.


The problem of awareness – Will we see it coming?

The problems of control and alignment expose the situation in which a superintelligence could end up being harmful to us. In both cases, we assume that a superintelligence already exists and, more importantly, that we’re aware of it. This raises a question: Is there a possibility that a superintelligence emerges without us knowing it? That’s the problem of awareness. It points to the essential question of whether we’re capable of foreseeing the appearance of a superintelligence.

From this perspective, we find two cases. In the first, a superintelligence appears too fast for us to react – an intelligence explosion. In the second, we’re unaware that it’s even happening – the problem of ignorance.

An intelligence explosion – From AGI to superintelligence

Either we arrive at a superintelligence slowly, step by step, following a carefully planned and controlled path, or an intelligence explosion occurs as soon as we create an artificial general intelligence (AGI). In this second scenario – the one Stephen Hawking depicted – an AGI would improve itself recursively until it reaches the Singularity. In the words of futurist Ray Kurzweil,

"Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity – technological change so rapid and profound it represents a rupture in the fabric of human history."

It’s reasonable to think that there’s a level of intelligence at which an AI would be smart enough to improve itself. An AI – faster, more accurate, and with a better memory than ours – could reach that level without prior warning.

The reason is that narrow AI already performs far better than we do at some basic functions. Once it acquires System 2 cognitive functions, its unmatched memory and processing capability could let it become a superintelligence faster than we imagine. If this scenario comes true, we won’t have time to devise a contingency plan.
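To see why recursion implies a fast takeoff, here’s a crude back-of-the-envelope sketch in Python. The growth rule, the 50% gain per round, and the "human level" threshold are entirely arbitrary assumptions chosen to show the shape of the curve, not estimates of anything real.

```python
# Back-of-the-envelope toy: recursive self-improvement with compounding gains.
# The starting capability, the 50% gain per round, and the "human level" bar
# are arbitrary assumptions used only to illustrate compounding growth.

def recursive_improvement(capability=1.0, gain=0.5, human_level=100.0, max_rounds=50):
    """Yield (round, capability) until capability passes the arbitrary bar."""
    for round_ in range(max_rounds):
        yield round_, capability
        if capability >= human_level:
            break
        capability *= 1 + gain  # each improvement round compounds on the previous one

for round_, capability in recursive_improvement():
    print(f"round {round_:2d}: capability ~ {capability:.1f}")

# With these made-up numbers, capability goes from 1 to over 100 in about a dozen
# rounds. The point isn't the numbers: compounding leaves very little time to react.
```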

The problem of ignorance – We may be too dumb

Whoever builds an artificial general intelligence first will rule the world. Or at least that’s how it feels watching big tech companies develop and deploy increasingly powerful machine learning systems year after year.

The trend of building ever-larger models is in its heyday thanks to the possibilities of self-supervised learning and the use of supercomputers. But we still can’t answer the question of why we’re doing it this way. The direction we’re following is clear, but how or when we’ll arrive at our destination is unknown. It’s as if we’re running towards a wall blindfolded. We’re convinced that, because deep learning systems are working wonders, this paradigm will eventually lead us to our final stop.

However, there’s an important issue here. What if the problem isn’t that we’re blindfolded but that we’re blind? What if our capacity to understand the reality around us is too limited to detect whether we’ve built a superintelligence or not? I’ve talked about this issue in a previous article, where I claimed that our physical and cognitive limitations may prevent us from acknowledging the existence of a superintelligence. If we can’t develop tools to reliably perceive reality, we’ll stay unaware of a superintelligence arising in the dark.

If we keep creating powerful models and we’re actually on the right path, we may reach our destination before we know it. And, if the superintelligence arising in the dark happens to be unfriendly, then we’ll be in trouble.


Should we be afraid?

I want to use this last section to briefly share my perspective on whether it’s reasonable to be afraid or not.

Artificial general intelligence will come sooner or later (I’d say later rather than sooner). As I mentioned at the beginning, some experts (e.g. Gary Marcus) claim there’s no need to fear the emergence of a hostile superintelligence. They think we’re so far from it that the mere distance makes worrying about the problem absurd. However, they don’t claim it won’t happen. Even if the possibility remains science fiction, considering it is a nice philosophical exercise.

In line with this stance is the fact that we’re suffering from other AI-related problems right now. I mentioned the impact on the workplace, ethical issues, and environmental damage. These problems – if handled carelessly – could cause so much damage to society that we may never get to the point at which a superintelligence could emerge. If we destroy our planet’s climate, there won’t be a civilization to save from AI. Looking at the broader picture, it’s easier to understand why some people caricature this specific fear of AI overruling us.

If we manage to avoid all the AI-related problems lying between us and superintelligence, then we’ll have reason to fear the problems I’ve described here. They aren’t problems today, but they might be at some point, and we can’t afford to arrive at that point unprepared. If an intelligence explosion ends up happening, the problem of control would immediately become the number one priority.

But because those are mere hypotheses and speculative predictions, people losing their jobs and AIs growing racist and sexist should remain our principal focus. AI can be dangerous, but the ways in which it’s dangerous today are very far from robots enslaving people Matrix-style, super-AIs controlling the internet, or ultra-focused machines converting the universe into paperclips.


Travel to the future with me for more content on AI, philosophy, and the cognitive sciences! Also, feel free to ask in the comments or reach out on LinkedIn or Twitter! 🙂


Recommended reading

Unpopular Opinion: We’ll Abandon Machine Learning as Main AI Paradigm

Can’t Access GPT-3? Here’s GPT-J – Its Open-Source Cousin

