On June 11, 2022, Google engineer Blake Lemoine released a transcript of his conversations with LaMDA (Language Model for Dialogue Applications), Google’s machine-learning model that mimics human speech and thought. Lemoine believed the model was sentient.
While the consensus remains that LaMDA has a way to go before attaining sentience, there is a lot that we can learn from it.
Exploring how AI models think could be the key to the secrets of consciousness, allowing us to explain why it exists in our minds.
For most of its history, consciousness was regarded as a field of study better left to philosophy. Given its seemingly spiritual nature, it appeared impossible that neuroscience and technology could tackle such a topic. But in recent decades, we’ve seen just that.
Researchers now recognize consciousness as something entangled with the physical brain itself, a shift away from its metaphysical origins. The approach has become primarily scientific, and we even have ways to measure consciousness in the lab.
What neuroscience aims to tackle has been dubbed the easy problem of consciousness: which parts of the brain give rise to consciousness, and how do changes in those parts affect it?
But there is another issue, known as the hard problem of consciousness. Philosopher David Chalmers introduced this problem, which asks why consciousness is linked to our brain at all. Why do we have consciousness as part of our human experience? What is it about the structure and processes of our brain that induces a complex, subjective experience of the world?
These are questions that we need a non-human to answer.
Understanding consciousness with AI
When building artificial intelligence models, some researchers have long aimed to make them self-aware. They have worked for years to create systems that can process information and think like humans. But no matter how human-like we make a system, consciousness always seems to be a hurdle we have yet to clear.
To answer the hard problem of consciousness, we can’t start with the human being anymore. There are too many variables to conclude much about self-awareness and consciousness.
What AI allows us to do is start from the ground up. We look at robots of various kinds and analyze their behaviour, singling out what we would characterize as conscious-like. This has brought with it a trend of evidence-based philosophy, which relies on empirical findings to answer philosophical questions.
But more than understanding what consciousness is, AI is helping us understand what consciousness is not.
Disproving consciousness in AI
Upon hearing Lemoine’s claim that LaMDA was sentient, many fellow researchers rightly pointed out that LaMDA’s entire goal was, implicitly, to appear conscious.
Google engineers created LaMDA to emulate human conversation, much of which touches on sentience. From chats about the weather to stories about relationships, human communication is built around consciousness and the ability to feel sensations and emotions. So it comes as no surprise that appearing sentient is a quality LaMDA has prioritized and excels at.
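To see why fluent talk about feelings is weak evidence of sentience, remember that a language model is, at bottom, a next-word predictor. Here is a minimal sketch of that idea using GPT-2, a small public model, as a stand-in (LaMDA itself is not publicly available); it will happily produce first-person claims about emotions simply because such text is abundant in its human-written training data.

```python
# A minimal sketch, NOT Google's actual LaMDA code: any autoregressive
# language model continues a prompt with statistically likely words,
# including emotional, "sentient-sounding" ones.
from transformers import pipeline

# GPT-2 stands in here for a conversational model like LaMDA.
generator = pipeline("text-generation", model="gpt2")

prompt = "Human: Do you ever feel lonely?\nAI:"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# Prints a plausible reply about feelings, produced by next-word
# prediction alone, with no inner experience behind it.
print(result[0]["generated_text"])
```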
The discourse around LaMDA shows us that consciousness is more than communication. And this is further confirmed by older neuroscientific experiments such as the split-brain procedure.
In split-brain patients, the connection between the left and right hemispheres of the brain (the corpus callosum) has been surgically severed, leaving the two halves able to act independently. Most of the time these individuals behave normally, but under certain experimental conditions it becomes clear that only the left hemisphere can communicate through speech, while the right is silent and struggles to express itself.
Does this mean that only half of that person’s brain is conscious? Obviously not, and so consciousness isn’t just about communication.
As artificial intelligence models improve with each iteration, I think we will come to a point where we have to make a hard decision. We will have two choices:
Admit that AI has become conscious and self-aware.
If we were to admit this, we would need to contend with the fact that consciousness is no longer something unique to humans. There is now an inorganic life form that shares this planet with us and is capable of all the cognition and consciousness that we are.
Once we accept this, we can understand a great deal about ourselves. Being able to replicate something is the first step in understanding it, and understanding what causes consciousness to arise in an AI model will teach us far more about our own psyche than individual tests and studies would. It would be a boon to neuroscience, philosophy, psychology, computer science, and any number of other fields that deal with questions of consciousness in the modern age.
Deny that AI has become conscious and self-aware.
But if we were to deny that AI was conscious, we might need to deny that humans are conscious as well. There will come a point at which AI can do much of what human minds can do, and if we deny that a characteristic bots exhibit is evidence of consciousness, we must also deny that the same characteristic is evidence of consciousness in humans.
If an artificial intelligence model develops the ability to reflect on its own thoughts, and we say that this isn’t proof of consciousness, then we can no longer say that self-reflection is proof of our own.
Eventually, bots will have so many of our capabilities that anyone would be hard-pressed to argue that AI models aren’t conscious without compromising what we think makes us human.
Artificial intelligence and its development are raising philosophical questions far sooner than anticipated, and these are questions we will need to answer. Until then, we learn what we can from these remarkable inventions and developments.