ARTIFICIAL INTELLIGENCE | OPINION

AI is already an advanced technology, but it could evolve into a new species. We’d call it machina sapiens.
In the 1940s and 50s, scientists grew increasingly interested in computers and the brain. The cognitive sciences and computer science were promising newborn fields that opened up intriguing research possibilities: cybernetics, artificial neural networks, neuroscience, artificial intelligence… slightly different fields springing from the same place but heading toward very distinct futures.
Some scientists opted for the cognitive sciences, with humans at the core of their investigations. They argued that the priority was to understand our brains and the underpinnings of biological intelligence and consciousness.
Others thought it possible to build an electronic brain with the cognitive power of humans. For them, understanding the brain came second, and they dismissed the need to follow in biology’s footsteps. Why assume we can only attain intelligence by taking nature’s path? Biological intelligence happened to be imprinted in carbon-based life forms, but there was no apparent barrier to creating artificial intelligence from silicon.
However, they missed a key question: What would happen to our society if we achieved that goal? What would happen to our civilization if we, homo sapiens, tried to coexist with our silicon-based counterparts, machina sapiens?
Should AI mimic human biology? The eternal debate
Most AI experts agree that AI need neither replicate the human brain nor completely dismiss our evolutionary inheritance. But there’s disagreement within this agreement. No one knows how to keep building increasingly intelligent AI or how to overcome deep learning’s bottlenecks. AI can do narrow tasks very well but, for the most part, can’t generalize. It can’t reason, plan, or interact with the world the way we do. And it’s an easy target for attacks no human would fall for.
Because we don’t know which path is correct, many possible paths remain open.
On the one hand, deep learning advocates argue that we don’t need to follow biology blindly. They claim neural network-based AI is the only way to attain general artificial intelligence. In an interview with MIT Technology Review last year, Geoffrey Hinton said: "I do believe deep learning is going to be able to do everything." The truth is, a decade of deep learning successes backs up these claims. From AlexNet to AlphaZero to GPT-3, deep learning has time and again reaffirmed its reigning position in AI.
They’re convinced deep learning is the way to AGI, even if it still needs some breakthroughs. Self-supervised learning and the transformer architecture are among the latest arguments in favor of this view, but not everything points that way. In a study published last year in Science, Yiota Poirazi and her team found that biological neurons are even more complex than we previously thought, whereas artificial neurons are built on the assumption that biological neurons are "dumb calculators of basic math."
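To make that contrast concrete, here is a minimal sketch in Python, with made-up weights, of what a standard artificial neuron actually computes: a weighted sum of its inputs plus a bias, passed through a simple nonlinearity. That really is the "basic math" the comparison refers to.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """A standard artificial neuron: weighted sum of inputs plus a bias,
    passed through a ReLU nonlinearity. Nothing more."""
    pre_activation = np.dot(inputs, weights) + bias
    return max(0.0, pre_activation)  # ReLU: negative sums become zero

# Illustrative values only (not taken from any real model)
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.8, 0.1, -0.4])   # "learned" weights
print(artificial_neuron(x, w, bias=0.2))  # prints 0.0, since the weighted sum is negative
```

A biological neuron, by contrast, performs nonlinear computations within its own dendrites; the point of the comparison is how little of that richness the artificial unit retains.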
This is why, on the other hand, some people say AI is different. It’s different from other technologies in that it’s trying to solve the most complex challenge of all. After all, the human brain is often called the "most complex thing in the universe." Our brain is the only working instance we have of what we want to build; it seems reasonable to keep that in mind.
Even more problematic is the fact that we don’t understand the brain and, as Professor Sir Robin Murray predicts, "We won’t be able to." For other technologies, we can extract the underlying laws and principles behind the phenomena we try to model. Planes don’t fly like birds, but they do obey the physical laws of aerodynamics and fluid dynamics. We can’t do the same for brains just yet because neuroscience isn’t mature enough.
For these reasons, people like Gary Marcus argue we should look to the human brain: "We need to take inspiration from nature" to get further in AI. Marcus defends a hybrid approach to artificial general intelligence, claiming we should integrate the power of deep learning with older paradigms such as symbolic AI. Combined, data-driven and knowledge-driven AI systems could be more capable than the sum of their parts.
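To give a flavor of what "hybrid" means here, below is a purely illustrative sketch, not Marcus’s actual proposal: a stand-in for a learned perception module produces a symbol, and a tiny hand-written knowledge base reasons over it. The function names and the knowledge base are hypothetical.

```python
# Illustrative sketch of a hybrid (neuro-symbolic) pipeline.
# The perception stub and knowledge base below are invented for this example.

def neural_perception(image_path):
    """Stand-in for a trained deep network: maps raw input to a symbol
    plus a confidence score (hard-coded here for illustration)."""
    return "cat", 0.93

KNOWLEDGE_BASE = {
    "cat": "animal",           # cat is_a animal
    "animal": "living_thing",  # animal is_a living_thing
}

def symbolic_reasoner(symbol):
    """Knowledge-driven step: follow explicit is_a links to derive facts
    that aren't encoded in pixel statistics."""
    facts, current = [], symbol
    while current in KNOWLEDGE_BASE:
        parent = KNOWLEDGE_BASE[current]
        facts.append(f"{symbol} is_a {parent}")
        current = parent
    return facts

label, confidence = neural_perception("photo.jpg")
if confidence > 0.9:                 # trust the data-driven module...
    print(symbolic_reasoner(label))  # ...then reason symbolically over its output
# -> ['cat is_a animal', 'cat is_a living_thing']
```

The data-driven part handles messy perception; the knowledge-driven part supplies explicit structure the network never learned. That division of labor is the intuition behind the hybrid camp.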
Deep learning systems don’t learn, see, or process like our brains. Evolution gave us a wide array of innate structures that make it easier to interact with our world. Why not imbue AI with them? Otherwise, deep learning systems may not be able to develop abilities like common-sense reasoning, self-learning, or building mental models of the world.
Both sides of the debate hold strong arguments, but it seems no one is paying attention to the bigger picture. Thinking in purely biological terms falls short of tackling the social dilemma. Let’s get back to the missing question I mentioned earlier: If we intend to eventually live in an inter-species society, isn’t it obvious we should make AI more human, even if that’s the less optimal solution?
A socially compatible AI should be our priority
From a strictly technical point of view, the above debate is meaningful. What’s the best path to creating a new technology? At which point, or in which aspects, should we start or stop copying biology? The issue with these questions is that they’re one-dimensional, whereas the world is multidimensional. Science and technology care about understanding reality and creating useful solutions to improve our well-being, but they stop there.
What about the social dimension? If we take into account the psychological and sociological aspects of this challenge, it’s clear we have to follow biology at the very least. Maybe not for today’s "AI" systems, but it will become a requirement if we ever attempt to build human-like AI.
Let’s think about this for a second. We’re aiming to create human-level intelligent entities. That’s not the same as building machines we can switch off when the day is over. Will human-level AI develop consciousness? How will we interact with these entities? Will AGI be more like our smartphones or more like our brothers and sisters? Do we want AI to have a humanoid form, or should we maximize utility at the risk of alienating them?
Pop culture has explored these questions on many occasions, from recent films like Chappie and Ex Machina to older books like I, Robot and 2001: A Space Odyssey. Taking care of these aspects will be crucial if we eventually enter an era in which we want AI to be a sentient part of our lives.
Yet this isn’t an easy task. What happens in today’s society when one person feels alienated from another? Differences in ideology, race, religion, or gender cause some of the most recurrent human conflicts. Seemingly innocuous misalignments fuel the most barbaric side of human nature.
Non-human AI would be so far from us that any other human would feel like family. Any human-to-human misalignment would be negligible in comparison. How could we adapt, as individuals and as a society, to a new, equally capable species?
The question of "should we follow biology?" feels shallow now; it misses the point at this level of the conversation. The debate should revolve around the consequences we’d face if we created an AI that ended up being socially incompatible with us. At every point in this discussion, we should place people’s well-being above utility.
We should imbue AI with biological, psychological, and sociological features insofar as we aim to make it generally intelligent.
The dangers of an emotional AI
But there’s a risk we can’t dismiss. Copying biology implies imbuing AIs with emotions and motivations, which is considered one of the most dangerous paths to follow. Emotions and drives are the forces behind most of the damage we do to ourselves and others. Yet if we want AI to be like us, emotions can’t be off the table. The question we should answer, then, is: could an emotional AI really overpower and hurt us?
"To take over the world, the robots would have to want to; they’d have to be aggressive, ambitious, and unsatisfied, with a violent streak. […] For now there’s no reason to build robots with emotional states at all." – Gary Marcus
In his book Rebooting AI, Gary Marcus observes that it isn’t intelligence that could make AI escape our control. It’s not a matter of capacity, but motivation. As Steven Pinker explains, "Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: Being smart is not the same as wanting something."
An AI that wants something is an AI that potentially wants something we don’t want it to want. What would happen if an AI decided that its primary goals weren’t aligned with its current situation and suddenly developed a motivation to change things without our consent?
This isn’t an unlikely scenario, given that we do the same. Evolution built us as we are as a species, but what we want as individuals is often misaligned with survival. Would we prefer to survive chained forever or to live happily for half that time? Survival isn’t our primary goal; maximum well-being is. The reason is that we can feel. Joy and pain are the strongest forces behind our actions. And so we’d gladly change our genetic endowment to better fulfill our consciously defined goals.
A human-level AI without emotions or motivations would be too alien to us to allow unproblematic coexistence. But an AI with emotions is the scariest type of AI: the type that could escape our control and realign its goals away from our own. It seems that emotional AI is both the first and the last thing we want.
Conclusions
- Let’s make something clear: not all AI needs to copy the brain. Machine learning and deep learning comprise a vast array of tools, models, and techniques, most of them so far removed from the human brain that they’re better classified as statistical systems. The debate about biology-based AI is about the future of AI, not the present. And it’s in that sense that AI needs to resemble us.
- AI is like other technologies in most respects, but there’s one in which it’s different: it’s the only technology that could become a new species with which we’d have to coexist. From this perspective, social arguments should carry more weight than technical ones.
- If we want to make human-like AI, we’ll need to imbue it with emotions and motivations. However, that’s a primary source of long-term worry for most scientists working in the field. Emotions are what make us want and need things. If we fail to align AI’s emotions with our goals, it could end up freeing itself from our control, with unforeseen consequences.
Then let me ask you this: What is more dangerous, non-human AI or emotional AI?