Artificial Intelligence (A.I.) is probably one of the most misused terms in technology and data science. It features in articles, books, politicians' speeches, and the PowerPoint slides of hefty daily-rate consultancies. Even the Pope talks about it [1]. As of 2019, according to Gartner, 1 out of 3 corporations claimed to have implemented A.I. "in some form" [2].
But is Artificial Intelligence actually there?

First, what is Intelligence exactly?
Getting entangled in the controversy over what defines Human Intelligence is out of scope; instead, I will borrow some ideas from Judea Pearl and his Ladder of Causation [3]. Pearl describes intelligence as a three-step process: "learning by association" sits on the first rung of the ladder, figuring out causality and assessing upfront the impact of interventions on the second, and learning by imagining counterfactual realities at the top.
What we have in common with animals and machines is the ability to detect patterns in past events to predict a future outcome. For example, our past experience with weather makes us infer that clouds are associated with a high probability of rain. What distinguishes us from machines and animals is the ability to establish causal relationships between events and, above all, to imagine alternative realities. Human Intelligence allows us to answer questions like: "What would have happened if I had acted differently?"
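To make rung one concrete, here is a minimal sketch (plain Python, with made-up observations) of what "learning by association" amounts to: counting how often rain followed clouds in the past and turning those counts into a predictive probability. No notion of cause and effect is involved.

```python
from collections import Counter

# Hypothetical past observations: (was it cloudy?, did it rain?) pairs.
observations = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

# "Learning by association": estimate P(rain | clouds) by simple counting.
counts = Counter(observations)
cloudy_total = counts[(True, True)] + counts[(True, False)]
p_rain_given_clouds = counts[(True, True)] / cloudy_total

print(f"P(rain | clouds) = {p_rain_given_clouds:.2f}")  # 0.67 on this toy data
```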
Where do we stand?
Arguably, what people refer to when talking about A.I. is merely Machine Learning: a set of statistical methods, self-improving (learning) algorithms and performance-optimisation procedures excelling at the task of "learning by association". Exactly what sits on the first rung of the Ladder of Causation.
Machine Learning algorithms are what powers the data products pervading our daily life, such as our social media feeds, our GPS systems or our home assistants. Their capabilities have grown dramatically over the last decade, mainly for two reasons: 1) in-memory computational capacity scaling up by an order of magnitude in the last few years (both vertically and horizontally); 2) the availability of huge sets of data to feed to those algorithms.
However, their Artificial Intelligence is limited to mimicking a set of human cognitive capabilities, like detecting objects in a picture or predicting the next word in a sentence. They are complex functions mapping an input to an output, where data is represented in a bespoke way and fed to a function that improves itself through a feedback mechanism and smart optimisation tricks. Neural Networks, for example, which are sold interchangeably as A.I., Deep Learning or Machine Learning, are in fact an iterated sequence of matrix multiplications interleaved with simple non-linear functions.
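To see how little magic hides behind the term, here is a minimal sketch (plain NumPy, with made-up layer sizes and random weights) of a two-layer neural network's forward pass: a matrix multiplication, a simple element-wise squashing, and another matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions: 4 input features, 8 hidden units, 2 output scores.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    """A 'neural network' is essentially this: matrix multiply, squash, repeat."""
    h = np.maximum(0, x @ W1 + b1)   # matrix multiplication + ReLU non-linearity
    return h @ W2 + b2               # another matrix multiplication

x = rng.normal(size=4)               # one hypothetical input example
print(forward(x))                    # two raw output scores
```

Training consists of nudging the numbers in W1, b1, W2 and b2 so that the outputs get closer to the desired ones: a feedback mechanism, not a model of the world.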

The last few decades have witnessed a phenomenal set of discoveries supercharging Machine Learning: back-propagation to start with, then convolutional neural networks, reinforcement learning and attention models, to cite a few. Consequently, Machine Learning algorithms are now capable of human-like performance on specific tasks in computer vision, language translation and playing complex games like Go, but they are still unable to autonomously transfer the skills acquired for one task to a different context. A company might claim to have introduced A.I. into its processes after developing an algorithm predicting whether a customer will buy Chocolate Bar A or Chocolate Bar B, but the algorithm will not be good at predicting that the customer might choose Candy Bar C, unless explicitly programmed to do so. Which is to say, the machine is unable to imagine a counterfactual reality.
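As a toy illustration of that narrowness (hypothetical customer data, using scikit-learn), a classifier trained to choose between bars "A" and "B" literally has no way to answer "C": that label does not exist in its universe unless a human retrains it.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical customer features (e.g. age, past purchases) and chocolate bar choices.
X_train = [[25, 3], [31, 1], [45, 7], [52, 2], [23, 5], [60, 0]]
y_train = ["A", "B", "A", "B", "A", "B"]

model = LogisticRegression().fit(X_train, y_train)

# Whatever the input, the model can only ever answer "A" or "B":
print(model.classes_)            # ['A' 'B'] -- "Candy Bar C" is not in its universe
print(model.predict([[35, 4]]))  # one of the labels it was trained on
```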
For every single task, models need to know in advance what kind of data they are being fed and what problem they are supposed to solve. Think of one of the fields most commonly associated with A.I.: autonomous cars. Every single aspect of driving a car in the chaotic real world requires an ad-hoc effort. For example, our brain can make instantaneous decisions about the amount of brake pressure necessary to avoid a collision with a moving obstacle. Mimicking this single task to perfection and deploying it requires teams of engineers and millions in investment (often available to just a handful of private companies, but that's a different story).
So what about the A.I. Revolution?
Neil Sahota and Michael Ashley dedicated an entire, enjoyable book to the "A.I. Revolution" [4]. They reckon defining Intelligence remains an open question, but they state: "true A.I. has no set limits precisely because it can modify the way it thinks – it can draw meaning from information and experiences to complete an assigned task". They make it clear later in the book that A.I. is not about mimicking human cognitive capabilities and that things like Siri, Alexa or Cortana cannot be considered A.I., since "no independent learning or decision-making is involved". However, they subsequently define three kinds of A.I.:
- Artificial General Intelligence (AGI), which according to them is not there yet, with still a long way to go.
- Artificial Super Intelligence (ASI), which they leave to the realm of sci-fi.
- Artificial Narrow Intelligence (ANI), which is about machines "using algorithms to make decisions regarding a single subject". Oddly enough, the authors rule Alexa and Siri out of the family of A.I. algorithms but include the iRobot Roomba vacuum cleaner.
At this point, understanding exactly what the difference is between a Machine Learning algorithm and Artificial Narrow Intelligence becomes difficult. The boundary blurs. And the authors contribute to the confusion with an ominous final quote from computer scientist Lawrence Tesler: "A.I. is whatever hasn't been done yet."
It's not just a matter of semantics…
Understanding the difference between Machine Learning and Artificial Intelligence is not just practitioners' coffee-table talk. The algorithms we often see incorrectly labeled as A.I. unlock a huge set of use cases and can make a difference for our society and for private companies, but they are not intelligent at all. They incorporate the biases, the defects and the objectives of their very human makers. Keeping that clear in mind helps manage expectations regarding both the positive impact they can have and the damage they can provoke. As Cathy O'Neil put it bluntly [5]:
Machine learning […] algorithms do not have an embedded model of the world that can reliably distinguish between the truth and the lies.
Many of the self-proclaimed A.I. algorithms deciding the rating of a school teacher or who gets a loan are just badly coded Machine Learning algorithms. And as such, they should be kept in check.
Conclusions
Artificial Intelligence is far from being a mission accomplished; it is more of a journey in the making. It features often in PowerPoint presentations, less often in real life. On the flip side, Machine Learning has advanced impressively in the last few years, particularly in fields like Natural Language Processing (check out what the GPT-3 algorithm is capable of). This can be seen as a necessary prerequisite to developing machines that can build their own model of the world, make the leap from correlation to causality and reason by counterfactuals.
But then…are we sure that we know what we want?
References
[3] The Book of Why, Judea Pearl and Dana Mackenzie, Basic Books, 2018, pp. 27–37.
[4] Own the A.I. Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition, Neil Sahota and Michael Ashley, McGraw-Hill, 2019, chapters 2–6.
[5] Weapons of Math Destruction, Cathy O'Neil, Broadway Books, 2016, p. 199.