
What Would the World Look Like if AI Wasn’t Called AI?

A thought experiment of what could have been

ARTIFICIAL INTELLIGENCE | OPINION

Photo by Marjan Blan | @marjanblan on Unsplash

The field of AI could have had many names. Artificial intelligence is probably the least accurate of them all.

When the founding fathers of AI met in 1956 to find a name for the field, the objective they had in mind was creating a machine with human-like intelligence, behavior, and even sentience: an artificial general intelligence (AGI). However, at that time neither hardware, software, nor data science was mature enough to achieve that goal. They were naive to think AGI was easily attainable.

Nowadays, the promises of AI and the dreams and desires of its founders are largely forgotten. We’re creating effective systems that are good at extracting patterns from data to make predictions. But we’re no longer aiming to build AGI (at least most of the research and projects aren’t). Yet the field is still called artificial intelligence.

It’s a catchy concept. I even like writing down the words "artificial intelligence." It sounds like we’re building the future straight out of Asimov’s sci-fi stories. What would happen if we changed the name now? Or, to depict a more historically plausible scenario: What would the world look like if artificial intelligence had never gotten this name in the first place?


The repercussions of the name Artificial Intelligence

Around the 1940s, discoveries in neuroscience and computer science generated a wave of new research interests, which gave birth to several disciplines: cybernetics, concerned with how systems regulate themselves through feedback; artificial neural networks, inspired by the neurophysiology of the brain; and artificial intelligence, studying how intelligent agents perceive the environment and take actions to affect it.

At that time, the disciplines didn’t have those names. They were a mix of ideas and approaches oriented toward shared problems. But in 1956, John McCarthy and others decided to split from cybernetics and founded a new field of research. McCarthy proposed artificial intelligence as the name. Other ideas were considered but dismissed. AI proved to be a good choice: it caught the attention (and hoarded the funding), pushing cybernetics toward the social sciences and sending ANNs into oblivion for more than 20 years.

Artificial intelligence, whose original goal was creating AGI, is aimed at solving intelligence. Even today, every headline about the supposedly intelligent systems we’re building carries the words "artificial intelligence." The field of AI has achieved a lot since its conception, but nothing that could be called "artificial intelligence."

In an article for the Wall Street Journal, Microsoft CTO Kevin Scott explained why "artificial intelligence is a bad name for what it is we’re doing." The words "artificial intelligence" remind us of our intelligence. If we were to ask a layperson what artificial intelligence systems can or can’t do, they’d make "associations about their own intelligence, about what’s easy and hard for them," says Scott. "[People] superimpose those expectations onto these software systems."

Daron Acemoglu, an economist at MIT, agrees: "I think AI is somewhat of a misnomer." For people who aren’t familiar with AI research, the words artificial intelligence make their imaginations wander: Terminator and The Matrix, Isaac Asimov’s Foundation trilogy and Arthur C. Clarke’s 2001: A Space Odyssey. Artificial intelligence is ingrained in our culture, and that’s the meaning it has for most people.

But not everyone thinks the name has had that much impact. Viral Shah, CEO of Julia Computing, thinks we shouldn’t get "hung up on semantics." But is it just a matter of semantics? Would AI have received the attention it got in its early days if it had been called, as Herbert Simon proposed, "complex information processing?"


A matter of semantics? – The power of language

Why did John McCarthy choose the name "artificial intelligence?" In his book Defending AI Research, he gives an unexpected reason:

"_[O]_ne of the reasons for inventing the term "artificial intelligence" was to escape association with "cybernetics." Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him."

In the beginning, people were doing AI within the larger field of cybernetics, in combination with early artificial neural networks. But McCarthy didn’t seem to like Wiener, one of the leading cyberneticists. He didn’t choose the name to accurately depict the work they wanted to do. He chose AI to set the new field apart from another discipline and the people working on it.

And he succeeded. AI gained prominence and enough money to fund the research for several years. The degree to which the name shaped how events unfolded can’t be measured, but I suspect it’s one of the main reasons behind the field’s attractiveness.

Now, let me tell you two stories about the importance of names.


Language influences reality – No trace of blue

How we name things greatly affects how we perceive them and, by extension, the world around us. Reality influences language: we talk and think about the objects around us and the events that happen to us, so our reality defines what we use language for. But language also influences reality, in a more fundamental way than you may think.

William E. Gladstone, former prime minister of the UK, was the first to notice something strange in how the ancient Greeks perceived color. In his studies of Homer’s The Iliad and The Odyssey, he realized that for Homer the sky wasn’t blue and the sea was "wine-looking." Linguist Guy Deutscher studied Gladstone’s essays and found an explanation: it isn’t that the Greeks lacked an evolved visual system. They simply didn’t have a word for the color blue.

But is this just a matter of semantics, as Shah argues is the case for AI, or was language affecting the perception of the ancient Greeks?

There’s a perceptual phenomenon called categorical perception. It explains why, although color varies along a continuum (light wavelengths), we perceive discrete categories of color (blue, red, yellow, green…). Because the Greeks didn’t have a word for blue, the corresponding wavelengths fell into the closest categories, such as black or green. We’re good at discriminating between categories but bad at discriminating within them.

The Greeks had the same visual system as we do, but worse color perception. And language was the reason.

Language influences behavior – India or delta?

But let’s bring up a more recent example. Let’s talk about covid.

The US (and the whole world) is suffering a new rise in coronavirus cases. The infamous delta variant seems to be driving this wave – with help from anti-vaxxers and policies that lifted restrictions. But the delta variant wasn’t always named delta; it was called the Indian variant. The WHO decided to rename covid variants from their countries of origin to letters of the Greek alphabet.

The aim was to stop the stigmatization of people from those countries. When Trump referred to covid-19 as "the Chinese virus," acts of racism against the Chinese population in the US increased drastically.

Trump using one word instead of another made life a nightmare for Chinese people in the US. Language changed people’s behavior.


How we use names influences reality and behavior. Specifically, names carry more weight in how we perceive concepts that are alien to us. Calling a place to sit a chair, a stool, or a sofa doesn’t change anything; we’re familiar with it and know its function. However, if I talk about cryptocurrency, quantum computing, or artificial intelligence, the names matter a lot. Those names hide a vast landscape of layered meaning that the words themselves don’t reflect.

Someone who doesn’t know what AI is about will try to infer its meaning from its name. The name of a complex, new discipline surely shapes its destiny.


A world in which "complex information processing" won over "artificial intelligence"

In 1956, Herbert Simon, one of AI’s founders, proposed a different name for the field. He argued it should be called "complex information processing." The name couldn’t be uglier. However, it’s more representative of what AI people actually do. He didn’t like the term "artificial intelligence" and for years afterward presented his work under the name he proposed.

What would have happened if Herbert Simon had gotten his way and AI had instead been called complex information processing (CIP)? Let’s imagine what would be different.

CIP wouldn’t be connected to human intelligence

One incentive – if not the main one – to make AI work was the promise of solving intelligence and, by extension, human intelligence. The possibilities it opened were simply unimaginable. CIP isn’t a name that makes promises. It doesn’t say "the future will pass through me." And it certainly doesn’t promise anything about us humans at all.

We don’t associate information processing with human intelligence. Even if the brain does process information, we never describe it that way, so information processing doesn’t interest us anywhere near as much as AI does. AI concepts constantly remind us of our biology. Many are named after neuroscience concepts we all know, even if only intuitively: neural networks, computer vision, deep learning, natural language processing, attention mechanisms, long short-term memory (LSTM). Equally important, we often describe AI systems as "understanding" data or having "thinking" processes.

CIP isn’t as flamboyant, but it’s more real.

CIP isn’t an attractive name

Complex information processing is a boring name. It almost sounds administrative. Artificial intelligence is attractive. Because AI sounds so good, it gained interest and funding very early on. CIP may never have gotten all that funding in the first place (sure, the actual projects matter, but it’d be naive to think they’re the only thing that does). Symbolic AI might never have existed, and maybe the AI winters – and summers – wouldn’t have happened.

CIP wouldn’t have caught the attention of those outside the field. Investors, politicians, journalists, and even laypeople wouldn’t care much about a boring-sounding field like CIP. It wouldn’t make as many headlines as AI does. Even if the work being done were the same, it simply sounds non-disruptive.

CIP is an honest name

CIP represents more closely what people in the field are actually doing. AI isn’t potentially dangerous robots, all-powerful virtual minds, or superhuman machines. AI is complex information processing. Only by removing the illusory layers of hype and attention from AI would we find the real science. Under the name CIP, we wouldn’t constantly garnish our work with big, unfeasible promises.

Yet we can’t ignore that the field might never have received the funding it has if it had been called "complex information processing," and therefore even what we have achieved might not have been done. Naming the field CIP instead of AI is a trade-off between attractiveness and honesty. CIP may depict an authentic view of the guts of the field, but it may not attract as much money, interest, or talent.

If, instead of making promises, we’re honest about the work we’re doing, we may not get enough money to do the work. If we "guarantee" AGI in the next 20–30 years, we may get a lot of funding and achieve more in the end, even if that promise is broken.

However, this approach is what generated the AI winters in the first place. Big promises attract big funding → big promises prove too big → big funding is withdrawn → there’s no money to do even the simplest work → the field stagnates.

Finding a balance between attractiveness and honesty is key to not disappointing those on whom we depend.

Hyped claims wouldn’t be so common – or credible

If a physicist were to claim, "We’re going to be able to create a star in 20–30 years," no one would believe it. Then why do we keep listening when some AI guru claims we’re going to build AGI in a few decades? Marvin Minsky said in 1967 that "within a generation […] the problem of creating ‘artificial intelligence’ will substantially be solved." It didn’t happen then. It still hasn’t happened. And it doesn’t seem it’ll happen anytime soon.

People kept claiming the ultimate goal of the field was just a few decades ahead. Symbolic AI achieved some modest successes. Then AGI must be around the corner. Machine learning systems learned to recognize words and classify objects. Then AGI must be around the corner. Deep learning systems generated human-level text and dreamy pictures, unveiled the mechanisms of protein folding, and can (almost) drive a car. Then AGI must be around the corner.

Well, AGI isn’t around the corner. We’d see that more clearly if the best chess player in the world were a complex information processing system and not an artificial intelligence. Because the name contains "intelligence," it seems we’re still working toward the original goal, but that isn’t the case anymore. We’re building narrow information processing systems. If a researcher working in CIP were to claim AGI was around the corner, people would laugh: Why are you even mentioning intelligence, mate?


Final thoughts

We can’t know what the world would look like if John McCarthy had never coined the term artificial intelligence. This article is mere speculation. However, this thought experiment is valuable because it makes us reflect on something we take for granted: How does what could have been influence the way we see and understand what is?

I was first attracted to AI because the words "artificial intelligence" sound very futuristic, very sci-fi. AI seemed to hold the secrets of the future. And it isn’t just me: AI makes a "conscious" effort to occupy that space in our collective imagination. It changes and adapts to encompass those aspects of mind-like technology that are still out of reach. When something that was considered AI starts to be well understood, we stop calling it AI. It stops being artificial intelligence and starts being "just" fancy mathematics and statistics. AI enjoys an enviable position in our technological and social priority hierarchy and, at the same time, deceives us into thinking it’s the oasis we’re looking for.

Does it make sense for a field of research to move forward to include only what seems beyond our capabilities? It reminds me of Arthur C. Clarke’s phrase: "Any sufficiently advanced technology is indistinguishable from magic." We want AI to be that advanced technology we can only dream of. But if we treat AI like magic, we’ll be disappointed when we see the rabbit in the hat.


Subscribe to my free weekly newsletter Minds of Tomorrow for more content, news, insights, and reflections on Artificial Intelligence!

Also, feel free to comment and reach out on LinkedIn or Twitter! 🙂

