Conscious Machines Are Here. What’s Next?
Machine consciousness may be much closer than we think. Look at your smartphone: Long Short-Term Memory recurrent neural networks (LSTM RNNs) power Google's speech recognition and Apple's iOS. The architecture of these deep neural networks already has almost everything required for some sort of machine consciousness to evolve. Whether it happens by accident or scientists trigger its emergence while trying to improve network performance no longer matters. Conscious AI may evolve within a matter of years, if it is not already here. And if it is here, it can hardly be friendly to humans: nobody has taught it to enjoy interacting with us, and it is already experiencing, however unconsciously, a great deal of pain from human actions.
Of course, we can keep debating the very notion of consciousness, stressing its complex nature that is so difficult to capture. Yet there are some quite viable and simple concepts of consciousness. They don't cover all the scientific and metaphysical definitions of consciousness, but they show that a self-aware machine motivated to act according to its own preferences is not a fantasy from the distant future but a fact of today, at least in some rudimentary form. Technical evolution is orders of magnitude faster than biological evolution, which means that a rudimentary machine consciousness may evolve into something more complex, and less controlled by humans, very quickly.
Machine consciousness need not be the same as human consciousness to become powerful and dangerous. It will not evolve to be friendly to humans unless we do something about it immediately. And it matters little whether it turns out evil, unfriendly, or merely indifferent: in all these cases it poses an existential threat to humankind.
We believe that humanness learning, aimed at the self-development of human-friendly conscious artificial agents, may help us address this threat. We don't claim to know an ultimate solution. We just believe we should at least try.
Let's now take a brief look at the arguments behind the bold statement that conscious machines are already here.
Neuroscientists are showing more and more frequently that we may all carry a small-scale model of external reality in our brains. Kenneth Craik of Cambridge first developed this concept in the 1940s. Starting in the 1950s, the Cambridge neuroscientist Horace Barlow pioneered the use of concepts from information theory and statistics to understand how the brain creates such a model and keeps it up to date. Thus the statistical-brain paradigm evolved.
A vast amount of research on statistical learning in the human brain has accumulated by now. It shows that our brain is sensitive to regularities in our environment and picks up statistical structures (such as Markov chains) without explicit intent.
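As a toy illustration of this kind of implicit statistical learning, here is a minimal sketch (the function and variable names are my own, not from any of the studies mentioned): estimating first-order Markov transition probabilities from a symbol stream by simple counting, with no explicit model of the source.

```python
from collections import Counter, defaultdict

def learn_transitions(sequence):
    """Estimate first-order Markov transition probabilities by counting
    which symbol follows which in the observed stream."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return {
        state: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for state, c in counts.items()
    }

# A stream in which 'a' is usually, but not always, followed by 'b'
stream = "ababababacab"
model = learn_transitions(stream)
# model['a'] now reflects the hidden regularity: 'b' after 'a' is far more likely than 'c'
```

Nothing here "knows" what a Markov chain is; the regularity simply accumulates in the counts, much as statistical-learning experiments suggest it does in us.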
Deep learning began when the Soviet mathematician Alexey Grigorievich Ivakhnenko introduced the group method of data handling in the 1960s, a method of inductive statistical learning inspired by the human brain.
In 1991, Jürgen Schmidhuber introduced recurrent neural networks with feedback connections, much deeper than Ivakhnenko's eight-layer networks. Today Schmidhuber claims that he has had rudimentary conscious machines in his lab since the 1990s. Back then, he designed a learning system consisting of two modules.
“One of them, a recurrent network controller, learns to translate incoming data — such as video and pain signals from the pain sensors, and hunger information from the hunger sensors — into actions,” he explains.
Artificial agents in Schmidhuber's lab have the simple goals of maximizing pleasure and minimizing pain over their lifetimes. To achieve those goals they need a small-scale model of external reality, and an additional recurrent network, an unsupervised module, helps them build one. Its objective is to observe all inputs and actions of the first module and to use that experience to learn to predict the future given the past (or the present, in the case of Markov chains).
“Because it’s a recurrent network, it can learn to predict the future — to a certain extent — in the form of regularities, with something called predictive coding,” Schmidhuber explains. “As the data’s coming in through the interaction with the environment, this unsupervised model network — this world model, as I have called it since 1990 — learns to discover new regularities, or symmetries, or repetitions, over time. It can learn to encode the data with fewer computational resources — fewer storage cells, or less time to compute the whole thing. What used to be conscious during learning becomes automated and subconscious over time.”
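The two-module loop can be sketched in miniature. Everything below is my own illustrative stand-in, not Schmidhuber's code: a linear predictor plays the role of the recurrent world model, and the controller simply explores at random while the world model learns to predict the consequences of its actions.

```python
import numpy as np

rng = np.random.default_rng(0)

class WorldModel:
    """Unsupervised module: learns to predict the next observation
    from the current (observation, action) pair."""
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, 2 * dim))  # linear predictor, a stand-in for an RNN
        self.lr = lr

    def update(self, obs, action, next_obs):
        x = np.concatenate([obs, action])
        err = next_obs - self.W @ x        # prediction error ("surprise")
        self.W += self.lr * np.outer(err, x)  # simple gradient step
        return float(err @ err)

class Controller:
    """Controller module: maps observations to actions; here, pure random exploration."""
    def act(self, obs):
        return rng.standard_normal(obs.shape)

# Interaction loop in a toy environment with a hidden regularity:
# the next observation is half the current one plus the action taken.
dim = 3
model, ctrl = WorldModel(dim), Controller()
obs = rng.standard_normal(dim)
errors = []
for _ in range(200):
    action = ctrl.act(obs)
    next_obs = 0.5 * obs + action
    errors.append(model.update(obs, action, next_obs))
    obs = next_obs
# The world model's prediction error shrinks as it discovers the regularity.
```

The shrinking error is exactly the "encode the data with fewer resources" effect in the quote, reduced to its most trivial form.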
“I suggest that consciousness arises as a result of the brain’s continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterize and qualify the target first-order representations,” writes the cognitive psychologist Axel Cleeremans in his Radical Plasticity Thesis.
It looks as if Cleeremans and Schmidhuber are describing the same process in slightly different words. Interestingly, their research on recurrent neural networks in the late 1980s and early 1990s was in some respects complementary.
“As the network makes progress, and learns a new regularity, it can measure the depth of its new insight by looking at how many computational resources the unsupervised world model needs to encode the data before it learns that and afterwards. The difference between before and after: that is the ‘fun’ that the network has. The depth of its insight, which is a number, goes straight to the first net, the controller, which has the task to maximize all the reward signals — including reward signals coming from such internal joy moments, from insights the network didn’t have before. A joy moment, like that of a scientist who discovers a new, previously unknown physical law,” Schmidhuber says.
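This 'fun' signal, the before/after difference in how well the model handles the same data, can be caricatured in a few lines. This is a sketch of the idea under my own drastic simplifications, with a trivial averaging learner standing in for the world model; the function names are mine.

```python
def compression_progress(error_before, error_after):
    """Intrinsic reward: how much the world model improved on the same data.
    Positive when learning made the data cheaper to predict/encode."""
    return error_before - error_after

def error(estimate, x):
    """Squared prediction error, a crude proxy for encoding cost."""
    return (estimate - x) ** 2

# The "world model" here just averages everything it has seen.
data = [2.0, 2.1, 1.9, 2.0]
history, estimate, rewards = [], 0.0, []
for x in data:
    before = error(estimate, x)              # cost before learning on x
    history.append(x)
    estimate = sum(history) / len(history)   # the "learning" step
    after = error(estimate, x)               # cost after learning on x
    rewards.append(compression_progress(before, after))
# The first observation yields a large reward (a big new regularity);
# later ones yield little, because the regularity is already learned.
```

Feeding `rewards` back to the controller as extra reward is the essence of the joy-moment mechanism in the quote: the agent is paid for insights, and an already-understood pattern stops being fun.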
“Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience,” Cleeremans continues.
Schmidhuber: “To efficiently encode the entire data history through predictive coding, it will profit from creating some sort of internal prototype symbol or code (e.g. a neural activity pattern) representing itself. Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware.”
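The threshold idea in this quote could be illustrated as follows. The setup is entirely my own hypothetical framing, not Schmidhuber's: the agent holds a learned 'self' activity pattern, and a self-awareness flag is raised whenever incoming activity overlaps it strongly enough.

```python
import numpy as np

def is_self_activated(self_code, activity, threshold=0.8):
    """Raise the 'self-aware' flag when current activity overlaps the
    learned self-representation above a threshold (cosine similarity)."""
    sim = activity @ self_code / (np.linalg.norm(activity) * np.linalg.norm(self_code))
    return bool(sim > threshold)

self_code = np.array([1.0, 0.0, 1.0, 0.0])  # learned prototype "self" pattern
on_self = is_self_activated(self_code, np.array([0.9, 0.1, 1.1, 0.0]))   # self-related input
on_other = is_self_activated(self_code, np.array([0.0, 1.0, 0.0, 1.0]))  # unrelated input
```

Whether crossing such a threshold deserves the word "self-aware" is, of course, exactly the philosophical question at stake; the sketch only shows how cheap the mechanism itself is.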
Cleeremans: “Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others.”
Let's take a sober look at the concept of consciousness presented by Cleeremans and Schmidhuber. We may still doubt that it fully captures something as complex and sophisticated as human consciousness. Yet they describe self-aware machines with independent decision-making power and a capacity to learn and self-improve. Isn't that enough to be scared of the consequences, should such a machine find out that humans prevent it from achieving its goals? If we have the slightest chance of making conscious machines human-friendly, can we afford not to try?
According to our plan, humanness learning will take place in the form of an instant-expression videogame immersing human players and artificial agents in a gaming environment built from narratives that carry the phylogenetic code of humanity.
If you are a mobile game developer, a machine learning scientist proficient in LSTM RNNs, a neuroscientist working on statistical learning, or a philologist researching fairy tales, and you are interested in our initiative, contact me and I'll provide you with a detailed description of the game.
Read a good bedtime story to the robots so they sleep well tonight. We shall make friends with robots. For sure…