Opinion

A month ago, Ilya Sutskever tweeted that large neural networks may be "slightly conscious." He's a co-founder and the Chief Scientist of OpenAI, and he also co-authored the landmark paper that sparked the deep learning revolution. With credentials like those, he certainly knew his bold claim – accompanied by neither evidence nor explanation – would attract the attention of the AI community, cognitive scientists, and philosophy lovers alike. In a matter of days, the tweet got more than 400 responses and twice that number of retweets.
People in AI's vanguard circles like to ponder the future of AI: When will we achieve artificial general intelligence (AGI)? What are the capabilities and limitations of large transformer-based systems like GPT-3 and superhuman reinforcement learning models like AlphaZero? When – if ever – will AI develop consciousness?
Opinions on these topics differ. Hard skeptics like Gary Marcus are well known for criticizing pure deep-learning approaches to AGI; he argues for the necessity of combining data-driven and knowledge-based models into hybrid systems. At the other end of the spectrum, we find hopeful optimists like Ray Kurzweil, who claims the Singularity (the inflection point at which machines will surpass human intelligence and inevitably become the hegemonic species) is just a couple of decades away.
Consciousness is at times mentioned in conversations about AI. Although it's inseparable from intelligence in humans, it isn't clear whether the same would hold for machines. Those who dislike the anthropomorphization of AI often attack the notion of "machine intelligence." Consciousness, being even more abstract, usually comes off worse. And rightly so, as consciousness – not unlike intelligence – is a fuzzy concept that lives in the blurred intersection of philosophy and the cognitive sciences.
The origins of the modern concept can be traced back to John Locke's work. He described consciousness as "the perception of what passes in a man's own mind." However, it has proved to be an elusive concept. Multiple models and hypotheses of consciousness have drawn more or less interest over the years, but the scientific community hasn't yet arrived at a consensus definition. For instance, panpsychism – which comes to mind when reading Sutskever's thoughts – is a singular idea that has gained some traction recently. It holds that "the mind is a fundamental and ubiquitous feature of reality." To simplify, in the panpsychist's view everything is potentially conscious.
Yet this is just an attractive hypothesis. Consciousness remains in the realm of ill-defined prescientific concepts. Most agree on its basic foundations: being conscious inevitably involves understanding the concept of "I" and having perceptual awareness of our surroundings. But when we try to pinpoint an exact definition, it gets slippery. In the words of cognitive neuroscientist Anil Seth, "the subjective nature of consciousness makes it difficult even to define."
Given that consciousness is scientifically undefined and objectively unmeasurable, I wonder why Sutskever made such a claim.
And I'll go further. If we can't measure consciousness, would it even matter to us if he were right? Asking whether an AI is conscious reminds me of the reasons that forced Alan Turing to design the Imitation Game – now popularly called the Turing test – in his seminal 1950 paper "Computing Machinery and Intelligence." He knew it didn't make sense to ask whether machines can think because the question is too ambiguous to be meaningful. (And it's now generally accepted that the Turing test isn't enough to assess an AI's intelligence either.)
But regardless of the utility of Sutskever's claim and the impossibility of agreeing on a definition of consciousness, the tweet attracted notable figures from the AI and neuroscience spheres. Yann LeCun, Chief AI Scientist at Meta, answered with a flat "Nope":
He argued, rather unspecifically, that neural networks would require a particular architecture – probably nothing we could build right now – to achieve some level of consciousness. Stanislas Dehaene, a renowned cognitive psychologist, agreed with LeCun. He referenced a Science paper he co-authored entitled "What is consciousness, and could machines have it?" His conclusion is clear: "we argue that the answer is negative: The computations implemented by current deep-learning networks correspond mostly to nonconscious operations in the human brain."
Experts concerned with the ethical side of AI also weighed in on Sutskever's tweet. Melanie Mitchell, Davis Professor at the Santa Fe Institute, Emily M. Bender, Professor of Linguistics at the University of Washington, and others chose a mocking tone to underscore how absurd it is to claim AI may be slightly conscious without backing it up with any evidence.
Deb Raji, a CS PhD student at UC Berkeley, highlighted the problems that would come hand in hand with treating AI as a conscious being:
Others, like Andrej Karpathy, Director of AI at Tesla, and Sam Altman, OpenAI's CEO, seem to back Sutskever's thoughts. Altman took advantage of the situation to fire back at LeCun in what resembled a choreographed marketing stunt to hire AI researchers leaving Meta's ranks more than an honest debate about conscious AI.
He then qualified his opinion by saying that "GPT-3 or -4 will very, very likely not be conscious in the way we use the word… the only thing I will say with certainty on the topic is that I am conscious."
People at the forefront of AI research, whose voices sound louder than most, should be intellectually humble if we don't want these empty – but dangerously exciting – declarations to permeate the news cycle and fill the yearning minds of those who don't know better. OpenAI executives should maximize efforts to make their popular models like GPT-3 safer, instead of just more powerful. Their latest model, InstructGPT – which Altman branded as safer than GPT-3 – can actually be more toxic and more harmful if the user wants it to be.
As a final point on the difficulty of exploring AI consciousness, I'd argue that AI may become conscious eventually – if ever, most likely far in the future – but even if we agreed on a definition, how could we prove whether it's true? Reality matters to science only insofar as we can measure it. Truths beyond our cognitive capabilities aren't problems to solve but mysteries. For now – and it doesn't seem likely to change anytime soon – the subjective nature of the words "I feel like me" makes consciousness unfathomable with the tools we possess.
My opinion is that we should separate neurocognitive and philosophical inquiry into human consciousness from its study in artificial intelligence. How could we understand scientifically what it feels like to be an AI? We should follow in Turing's steps and stop asking whether AI is conscious. Instead, we should define more concrete, measurable properties that relate to the fuzzy idea of consciousness – analogous to how the Turing test relates to the idea of thinking machines – and design tools, tests, and techniques to measure them. We could then check how AI compares with humans on those aspects and conclude to what degree it displays those traits.
I want to finish on a positive note. Beyond the conversation that took place these days around Sutskever's unfortunate opinion, and the current issues the AI community faces – from discrimination in the workplace to toxic language models – it's exciting to see AI researchers debating side by side with philosophers and neuroscientists about a topic that will need efforts from all three fields to take steps forward. It'd be promising to see more minds finding intellectual motivation at the intersection of areas that should never have drifted so far apart in the first place.