
There are many divisive questions in artificial intelligence. Can we build intelligent machines with current paradigms, or should we update the principles guiding AI research according to discoveries in the cognitive sciences? Should we keep exploiting the promise of deep learning, or should we imbue machines with both knowledge and data through a hybrid approach? Should we expect ever-bigger models to keep producing better results, or will algorithmic breakthroughs be needed to drive the next stages of AI?
These questions shape the present and future of AI, but only a handful of people care about finding the answers. There's one other aspect of AI, however, that should concern all of us, including you. It will impact, one way or another, the history that's yet to be written. I'm talking about the risks and dangers of AI. Oddly enough, despite the urgency of the subject, even here experts don't agree on what the most pressing issues are.
AI is ingrained in our daily lives to such a degree that anyone not familiar with the topic would surely underestimate just how much. Personal assistants on your smartphone; surveillance and control devices in airports and streets; friendly chatbots in customer service; hiring algorithms that affect your professional future; recommender systems that decide the movies you watch and the products you buy; detection and recognition software that knows who you are and what you look like; and quasi-intelligent cars that will make driving obsolete in the not-so-distant future.
AI’s ubiquity means that any misalignment can reverberate through many aspects of our lives. It already does. AI experts are fighting to improve the safety, responsibility, and interpretability of these systems. They bet on ethical AI that doesn’t harm minorities and doesn’t spread misinformation. They try to find solutions for the imminent damage to the workforce across blue- and white-collar industries alike. But even on this extremely critical matter, not everyone is in the same boat. Some think we should care more about controlling the potential emergence of a superintelligence. Mo Gawdat, former CBO at Google X, is one of them. Here’s why we should take his fears and warnings with a grain of salt.
Are we building God?
Mo Gawdat is scared of AI. He recently published a book entitled "Scary Smart," in which he warns us about an upcoming apocalyptic future that only we can stop. In an interview with The Times, he recalls the moment he realized AI would be our downfall. On one of Google X’s projects, developers were trying to teach robotic arms to pick up balls. After weeks of slow progress, one of the arms reached the ball and raised it to show it to the camera, as if it were "showing off" – which, in the eyes of anyone who knows how AI works, is just another example of anthropomorphization. That was the moment he realized it was "really scary."
Gawdat wondered why almost no one was talking about this. In a conversation with writer and tech practitioner Ken Yarmosh, Gawdat summarizes his view of where we are in the life cycle of AI: "AI is no longer a machine," he said. "We’re building a … digital being that … has every character of what makes a sentient being. So it’s autonomous, evolves, it has intelligence, it develops intelligence … it’s self-replicating … and it has agency."
"We’re building God."
Such a bold claim requires equally strong evidence, but Gawdat only provides anecdotal examples that can be explained perfectly well without resorting to esoteric concepts such as "sentient digital beings," the Singularity, or God. He argues that we don’t realize just how far we’ve come in AI development and mentions some "inevitables" that will happen on our way to the future he depicts. (His debate centers on whether that future will be a utopia or a dystopia; he’s confident it’ll come eventually.)
The first inevitable is that AI will happen; in fact, "[it] already happened," he says. He thinks deep learning is already AI because it does every task assigned to it better than we do – which isn’t true, as I’ll show in the next section. The second inevitable is that AI will become smarter than humanity. He references futurist Ray Kurzweil, the "oracle of predicting our future," and the Singularity, his most popular concept: "By 2029 the machines will be smarter than humans." That precise date derives from the argument of exponential growth, which is a weak defense given that "nothing in nature follows a pure exponential."
Overall, there are two major flaws in his arguments. First, he never defines what AI is. It’s impossible to agree or disagree with anyone when the terms discussed aren’t well-defined. In that very conversation, he acknowledges the importance of defining AI – but he doesn’t follow his own premise. Second, he extends arguments valid for today to a future about which we know very little. The scenarios of AI misalignment he talks about are already happening with today’s narrow AI systems. Yet he uses them to argue that a superintelligent machine is the real threat. Why not focus on what’s in front of our eyes instead of looking into unforeseeable futures?
Why AI isn’t ‘Scary Smart’
We should be careful with AI, but not for the reasons Gawdat describes. AI can be scary, but not because it’s too smart. Almost no AI expert would agree with Gawdat that the main threat today is that these systems have become, or are "about to" become, superintelligent. It may happen eventually, but most likely not in eight years, and well after we’ve faced other dangerous scenarios that are present today.
Is AI already here?
AI (as defined in the broadest sense, encompassing all machine learning/deep learning systems) has surpassed us in many narrow tasks, but it can’t reach our level at many others – let alone display intelligence in the general sense.
Algorithms excel at object recognition, one of the most well-studied tasks, but only under very specific conditions. The best vision models achieve a striking 90%+ top-1 accuracy on the ImageNet challenge (far better than humans). However, when tested on ObjectNet, a dataset of real-world objects, those same models suffer a 40–45% drop in performance. ImageNet depicts an idealized version of the world, so the challenge results give a distorted view of AI’s real ability at object recognition.
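To make that gap concrete, here is a minimal sketch of how one could compare a pretrained classifier’s top-1 accuracy on two image folders – say, an ImageNet validation split and an ObjectNet-style set of real-world photos. The model choice (ResNet-50), the folder paths, and the simplified label handling are illustrative assumptions on my part, not the setup of the original ObjectNet study (which carefully maps ObjectNet categories onto the overlapping ImageNet classes).

```python
# Sketch: compare top-1 accuracy of an ImageNet-pretrained model on two datasets.
# Assumes each folder's subdirectories are arranged so that ImageFolder's class
# indices line up with the model's ImageNet class indices (a simplification).
import torch
from torchvision import models, transforms, datasets

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Off-the-shelf ResNet-50 with ImageNet weights, used purely for illustration.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def top1_accuracy(folder: str) -> float:
    """Fraction of images whose highest-scoring class matches the folder label."""
    dataset = datasets.ImageFolder(folder, transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32)
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# Hypothetical local paths; the gap between the two numbers is the point.
print("ImageNet-style set:", top1_accuracy("data/imagenet_val"))
print("ObjectNet-style set:", top1_accuracy("data/objectnet_subset"))
```

The same model, with the same weights, produces both numbers; only the pictures change. That is exactly the kind of distribution shift that wipes out a large chunk of the headline accuracy.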
Gawdat recalls that machines have been the best chess players since as far back as 1989 (humans no longer have any chance of beating the best AI players). DeepMind’s AlphaZero, which obliterated Stockfish 8 two years ago, is among the strongest of them. You couldn’t win against it in a normal game, but you’d become the master again just by changing the board from 8×8 to 9×9. The task is extremely similar, yet AlphaZero can’t generalize its knowledge when facing even the tiniest deviation from what it learned.
Gawdat also claims that self-driving cars are the best drivers in the world. But not only do they crash more often than humans in relative terms; we’re also better at handling unexpected circumstances. The key weakness of autonomous vehicles is that reality has as many degrees of freedom as it gets. Anything can happen, and AI systems aren’t good at extrapolating from the training set to novel situations. Because they lack a deeper model of how the world works, anything outside their experience becomes an insurmountable obstacle.
OpenAI’s GPT-3, although it’s considered the most powerful public large language model, can’t generate analogies, solve math problems, understand contextual information, reason about the underlying principles of the world, or even link cause and effect. It can generate text in a wide array of forms, but it hasn’t mastered language in the human sense.
AI lacks theory of mind, common sense and causal reasoning, extrapolation capabilities, and a body, and so it is still extremely far from being "better than us" at almost anything slightly complex or general.
Will AI be smarter than humanity?
If we continue at the current rate of progress and no event slows us down (anything from a drastic shift in the sociopolitical system to a global phenomenon – like a climate disaster – could hinder technological advance), it’s logical to think "there’s no way stopping it." However, eight years feels like very little time for AI to reach such a milestone.
Ray Kurzweil, who coined the Law of Accelerating Returns, argued in his book The Age of Spiritual Machines that technology tends to grow exponentially. However, as physicist Theodore Modis explains in a counter-argument to Kurzweil’s predictions, "[his] wrongdoing is relying on mathematical functions rather than on natural laws … All natural growth follows the logistic function."
Indeed, we like to talk about exponential rates of change, like Moore’s law, but these "laws" are only true until they aren’t. There are natural limits to exponential growth, so it’s reasonable to assume that in reality "nothing follows a pure exponential," as Modis argues.
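The distinction Modis draws can be stated compactly. The two general forms below are the standard textbook ones, not formulas taken from Kurzweil or Modis:

$$
f_{\text{exp}}(t) = A\,e^{kt}
\qquad\qquad
f_{\text{log}}(t) = \frac{L}{1 + e^{-k\,(t - t_0)}}
$$

For $t$ well below the inflection point $t_0$, the logistic curve behaves like $L\,e^{k(t - t_0)}$ – it looks exponential – but it saturates at the ceiling $L$ instead of diverging. That is why early data points alone cannot tell the two regimes apart, and why extrapolating a trend that still looks exponential is such a fragile basis for a precise prediction.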
2029 is the date by which Kurzweil thinks the smartest being on the planet will be an AI. But he calculated it from a simplistic view of how math maps onto the natural world – not to mention all the other factors that constantly interfere with the rate of technological progress and could even change its direction completely, such as social movements, moral debates, or government regulation.
However, even if we assume AI will eventually become smarter than us, there’s no reason to think, as Gawdat clearly does, that it could decide to "go against us." He’s mistaking intelligence for motivation. As Steven Pinker explains (in a quote borrowed from Gary Marcus’ book Rebooting AI), "[i]ntelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: Being smart is not the same as wanting something."
We don’t know how to imbue AI with motivation – a trait that exists in us only because of the way we evolved. But even if we knew how, why would we do it? Just because humans have both the intelligence to know how to achieve their goals and the motivation to act on them doesn’t mean the two evolved together or that they are intrinsically intertwined.
The real problem – Mindless AI
Gary Marcus tweeted a response to Gawdat’s interview with The Times, highlighting the "real challenge" we face in AI today.
AI isn’t that smart, but it is indeed very scary. Gawdat’s focus on the existential threat hides from view the problems that happen every day at every level of society. Giving more weight to hypothetical risks we don’t yet understand – risks so far in the future that there’s not even a useful way to debate them yet – hinders our efforts to face the real dangers of AI.
Mindless AI, as Marcus calls it, is the real problem. We use it to make decisions and take actions across many industries. How can we do that when these systems don’t understand anything about how the world works or the consequences of their behavior? This is the very reason new branches of AI research focused on containing these issues have appeared in recent years, among which AI safety and AI ethics stand out.
How can we make sure that a technology that behaves effectively as a "black box," whose decisions are often unpredictable and whose reasoning is inscrutable, does what we want it to? The alignment problem, which Gawdat acknowledges, is very real right now. We don’t need to wait for a superintelligence to suffer the trouble AI can cause when it ends up doing something we didn’t expect. Bias is a harmful and pervasive feature of AI systems, which end up being racist and sexist and disproportionately target underrepresented minorities.
Mindless AI is also perfectly capable of replacing workers while generating huge amounts of pollution and increasing our carbon footprint. It’s also the foremost engine of fake news, and it has an unavoidable influence on almost every system that decides what we consume in terms of leisure and information. The real dangers of AI are the invisible ones that slowly and silently spread their branches through our ways of living while firmly cementing their roots in the foundations of our world.