5 AI Misconceptions Debunked

Artificial intelligence can be a very confusing topic

Photo by David Matos.

Ask a hundred people what they think Artificial Intelligence (AI) is, and you’ll likely get a hundred different answers. It’s a broad term with a near-unlimited number of interpretations and an equal number of misconceptions. This is precisely why my client meetings, workshops, and lectures on artificial intelligence inevitably feature a two-minute segment where I ask the audience how they perceive AI. Without fail, there are always some misconceptions that must be addressed before any discussion on AI can begin.

Some people have been exposed to AI through dystopian Hollywood flicks, some through philosophical podcasts, and some through deep-learning tutorials. Indeed, there are many entry points to the world of artificial intelligence, and people's perspectives vary greatly depending on which gateway served as their introduction.

A handful of misconceptions reign supreme. These are the misconceptions about AI that I most often encounter, even from technical professionals.


AI and Machine Learning are the same thing

Let's start with the absolute basics. Artificial intelligence and machine learning are not synonymous terms, and using them interchangeably can lead to miscommunication.

The term artificial intelligence has no unanimously accepted definition, so let’s break it down. The term is a two-parter: artificial and intelligence.

Artificiality is a term you may be unfamiliar with, yet it is very straightforward. It refers to an object that has been created by humans, as opposed to one that occurs naturally. As such, the clothes you wear, the bed you sleep in, and, of course, the phone you look at memes with are all artificial.

Intelligence, then, is a term that virtually everyone is familiar with, yet one that paradoxically no one can really define. What is intelligence, exactly? Many philosophers and scientists with far greater minds than mine have asked that question. In essence, you could say that intelligence refers to the ability to perceive and comprehend one's surroundings.

What does that make of artificial intelligence, then? The term AI must simply mean that some human-made object has some capacity for comprehension.

With that in mind, you could describe artificial intelligence as a vast collection of technologies that provide artifacts (in practice: computers) with an ability to comprehend. AI is a broad term that encompasses many technologies, one of which is machine learning.

Machine learning is a technique used to allow computers to learn things by themselves, sometimes under human supervision and sometimes autonomously.

When IBM's Deep Blue defeated the world chess champion in 1997, it wasn't developed using machine learning. Instead, it relied on brute-force search guided by evaluation rules defined by human experts: a recipe for finding the best possible move in any scenario, if you will. Yet, I would consider it artificial intelligence, as the intelligence that bested human players was an artificial form of intelligence.
By this logic, then, wouldn’t one be able to argue that a printer is using artificial intelligence when it tells its owner that it’s out of ink? Yes, absolutely. It’s not machine learning, it’s not self-taught, and it’s not particularly smart. But the machine clearly observed, comprehended, and communicated that it was out of ink, all by itself.
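The distinction can be made concrete with a small sketch. This is my own hypothetical illustration, not from the article: the first function hard-codes its logic the way the printer (or Deep Blue) does, while the second infers its rule from example data, which is the essence of machine learning. Here the "learning" is ordinary least-squares line fitting, written out in plain Python.

```python
# Rule-based "AI": a human encodes the logic directly, like the printer
# that reports it is out of ink. Nothing is learned from data.
def printer_status(ink_level: float) -> str:
    if ink_level <= 0.05:  # threshold chosen by a human designer
        return "out of ink"
    return "ok"

# Machine learning: the program infers its rule from examples instead.
# Ordinary least squares fits a line y = a*x + b to observed points.
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

print(printer_status(0.01))                        # out of ink
print(fit_line([(0, 1), (1, 3), (2, 5), (3, 7)]))  # learns y = 2x + 1
```

Both programs qualify as "artificial intelligence" in the broad sense used above; only the second one is machine learning, because its behavior comes from data rather than from rules a human wrote down.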


AI and Artificial General Intelligence are the same thing

This misconception is prevalent among non-technical audiences, especially those who have only ever been exposed to artificial intelligence through works of fiction.

Artificial general intelligence (AGI) is perhaps the ultimate level of AI. AGI is a form of AI that can accomplish any and every task that humans can perform, at least as well as humans can. AGI is a popular subject in works of fiction. The AIs depicted in Hollywood films such as The Terminator (1984), I, Robot (2004), and Her (2013) are all examples of AGI.

To be clear: AGI does not exist. All examples of AI that exist in the world today are AI that has been made to execute one specific task (these AI are sometimes referred to as modular AI or narrow AI). When corporations and governments implement AI solutions, they aren’t implementing Terminator-esque superintelligent beings. They are implementing modular technologies built to perform one specific task.

Photo by Markus Spiske.

AI is only used for automation

Another common misconception is that the only use case for artificial intelligence is automation. The reality is that AI can be used for two primary purposes: automation and augmentation.

  • Automation is the removal of humans from an activity.
  • Augmentation is the empowering of humans in an activity.

Automation and augmentation are opposite ends of a spectrum, and few AI solutions sit entirely at either extreme. Between them lies a scale that encompasses four strategies.

  1. The efficiency strategy, in which activities are optimized through automation.
  2. The effectiveness strategy, in which activities are made seamless, enabling easier communication.
  3. The expert strategy, in which AI empowers decision-making.
  4. The innovation strategy, in which AI enables creativity.

Examples of augmentative AI include machines that help doctors diagnose patients, help financial advisors make monetary decisions, or help product developers invent new products.


AI was invented recently

Believe it or not, the idea of bringing intelligence to objects, today referred to as artificial intelligence, has been around for at least 2,000 years. Philosophers in ancient Greece described automated reasoning in writing ages ago. However, while they could theorize about the subject for days, they had no way to implement it.

Practical implementations of AI have been developed for as long as we've had computers. Did you know that, while self-driving cars have become a hot topic in recent years, they have been researched since the 1920s?

There are three reasons why AI has become a hot topic in recent years:

  1. An explosion in user-generated data.
  2. Computers have become both more powerful and more affordable.
  3. Breakthroughs have been made in algorithmic research.

Artificial intelligence is bigger than it has ever been, but it’s not a new topic.


Artificial General Intelligence is far away (or will never happen)

As I mentioned earlier, artificial general intelligence (AGI) is a theoretical form of AI that can do everything humans can do, as well as or better than we can. Of course, AGI doesn't exist.

A lot of people are quick to dismiss AGI as sci-fi nonsense. Some confidently argue that AGI will never happen; others that AGI is centuries away. The truth is simply that no one knows. Some people believe we will achieve AGI within decades, some within centuries, and some believe we never will. But researchers take this topic seriously.

If our planet had been created a year ago, humanity would have existed on the planet for ten minutes, and the industrial era would have started only two seconds ago. The internet would have existed for mere milliseconds. The amount of technological progress we have experienced in this tiny timeframe is staggering. I personally find it likely that we will invent AGI someday, and when that happens, we need to be ready. This is also the mission of the Future of Life Institute, which takes the question of AGI very seriously.
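You can check that analogy with quick back-of-the-envelope arithmetic. The figures below are my own assumptions, not from the article (Earth roughly 4.5 billion years old, Homo sapiens roughly 100,000 years, the industrial era roughly 250 years, the internet roughly 40 years); plugging them in lands close to the durations quoted above.

```python
# Compress Earth's entire history into a single calendar year and see
# how long each era lasts on that scale. All figures are assumptions.
EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def scaled_seconds(real_years: float) -> float:
    """Duration of `real_years` if Earth's whole history were one year."""
    return real_years / EARTH_AGE_YEARS * SECONDS_PER_YEAR

print(f"humanity (~100,000 yr):   {scaled_seconds(100_000) / 60:.1f} minutes")
print(f"industrial era (~250 yr): {scaled_seconds(250):.1f} seconds")
print(f"internet (~40 yr):        {scaled_seconds(40) * 1000:.0f} milliseconds")
```

With these assumed inputs, humanity comes out at roughly ten minutes and the industrial era at roughly two seconds, matching the analogy; the internet lands at a few hundred milliseconds.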


Thanks for reading! If you enjoyed this article, you will probably enjoy my book on artificial intelligence:

This Is Real AI: 100 Real-World Implementations of Artificial Intelligence
