Are AI ‘Thinking Machines’ Really Thinking?

Mark Ryan
Towards Data Science
6 min read · Nov 1, 2019


Since the development of the first universal computers, scientists have postulated the existence of an artificial consciousness: a constructed system that can mirror the complex interactions that take place within the human brain. While some public figures are openly terrified about a coming cyborg apocalypse, for most people artificial intelligence these days refers to tools and applications that help us get our work done faster, rather than to androids and artificial people. AI is now predominantly understood as a narrow use of a particular type of technology, distinct from artificial general intelligence (AGI), a much broader concept that encompasses synthetic consciousness.

Elon Musk: right to be afraid?

Considering the growth of the field of AI over the past decade or so, and the massive ongoing investment, it is worth exploring just how far we have travelled along the path towards Terminators, replicants and R2-D2, and the problems that have presented themselves along the way. Many scientists and thinkers believe that AGI is a scientific inevitability, based on the concept of universality, while others suggest that there are fundamental physical limitations that prevent the recreation of consciousness. The disagreement is effectively a philosophical one; there is no empirical evidence that comprehensively backs either hypothesis. What is clear is that scientists have been extremely effective at recreating, and even improving upon, certain human skills, and entirely unsuccessful at reproducing others.

‘Artoo’ even had a sense of humour.

The idea of a synthetic consciousness that could resemble human-like intelligence raises mind-boggling ethical and moral questions. That is a massive and fascinating topic that I will not address here. Instead, I will consider the practical barriers to developing such an entity, and their philosophical implications.

Artificial intelligence is one of the leading trends in tech research today, to the extent that it infiltrates almost all other technologies. AI will continue to redefine how businesses operate as advanced analytics and automation become more efficient and reliable, meaning that companies that fail to adapt risk being left behind. New AI technologies, like those found in autonomous cars, or generative adversarial networks that can construct entirely original artefacts, could lead to previously unimaginable applications and ideas.

These advancements are based on the core idea of ‘thinking machines’: software that can replicate certain cognitive functions of the human brain. There is no single definition of AI (even the term ‘intelligence’ is subjective), but it is most often understood to refer to applications that can perceive their environments in order to accomplish their programmed objectives. Machines that can learn, i.e. develop understanding beyond what has been hard-coded, make up one of the largest sub-sets of developments in AI. Machine-learning and deep-learning algorithms are often based on artificial neural networks: computing systems loosely modelled on the way neurons in the human brain connect and fire.
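
To make the ‘neural network’ idea concrete, here is a minimal sketch in plain Python and NumPy: a tiny two-layer network that learns the XOR function by repeatedly nudging its weights. The architecture, data and hyperparameters are illustrative choices of mine, not drawn from any particular system.

```python
# A minimal sketch of an artificial neural network: a two-layer feed-forward
# net trained on XOR with plain NumPy. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a toy task a single neuron cannot solve, but a small network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for one hidden layer of 4 units.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer is a weighted sum pushed through a squashing
    # function, loosely analogous to neurons firing in response to inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust every weight to reduce the squared error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(3))  # should end up close to [[0], [1], [1], [0]]
```

Each ‘neuron’ here is nothing more than a weighted sum passed through a non-linearity; the brain analogy is loose at best, which is precisely the point of the paragraphs that follow.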

We refer to these as ‘thinking machines’ even though they do not think in the way that humans do. They perceive their environments, but they are not aware of them. Computers are furnished with memory, just as conscious beings are, and modern AI systems can anticipate or predict based on informational input (this is one of the ways an AI can construct a predictive model, for example for business or healthcare). These capabilities are all thought to be necessary aspects of consciousness, but a machine is only capable of implementing them in extremely narrow forms. AI is inflexible, incapable of anticipating or remembering anything outside its definite, limited programming. For example, a highly advanced machine-learning algorithm designed to predict road-traffic patterns cannot repurpose its intelligence to hold a conversation or play a game.

Machines can be programmed to learn how to play chess but would be stumped if presented with your accounts.
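
As a hedged, minimal illustration of just how narrow such a predictive model is, here is a toy traffic regressor in Python using scikit-learn. The traffic figures are invented for the example; a real system would learn from recorded sensor data.

```python
# An invented toy example of a narrow predictive model: estimating traffic
# volume from the hour of the day. The numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

hours = np.array([[6], [7], [8], [9], [10]])   # hour of day
vehicles = np.array([20, 35, 50, 62, 80])      # vehicles per minute

model = LinearRegression().fit(hours, vehicles)
print(model.predict(np.array([[8.5]])))        # a traffic estimate

# The fitted model is a single learned mapping from one number to another.
# Ask it to hold a conversation or play chess and there is simply no code
# path for it; its entire "intelligence" is the line fitted above.
```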

Building this kind of flexibility into AI appears to be a significant challenge. However, it may not be the most challenging aspect of consciousness to recreate. The concept of subjective experience, that is, internal and often unexplainable mental states and reactions, is often thought of as the ‘big question’ of consciousness by both psychologists and philosophers. Thomas Nagel wrote:

“…an organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for that organism.”

In other words, it is not enough for a machine to think; it must know that it is thinking, and have a sense of its existence apart from its thoughts. Descartes famously said “I think, therefore I am” to illustrate that he had a mind, distinct from a physical, thinking brain. This idea is often linked with the concept of qualia: subjective interpretations of sensations that are neither explainable nor predictable. Philosophers might describe the ‘ouchiness’ of the sensation of pain, or the innate ‘redness’ we experience when we perceive the colour red. We can describe scientifically what happens when light rays make contact with the cones in our eyes, and we can compare one colour to other, similar colours we have seen, but there is no way for two people to objectively compare their personal experiences of red. The concept is inherently problematic for scientists, and they mostly tend to ignore it. It is, however, one of many intangible, indefinable abstractions that undoubtedly exist, both within the human mind and beyond it, yet cannot be defined scientifically.

Red: perplexing philosophers for generations.

Abstract concepts like creativity, desire, social cognition (shared understanding), meaning and free will are necessary considerations for any conscious being, but they have proven extremely difficult to formalise mathematically. That makes them impossible to translate into computer code, and therefore impossible to impart to a machine. They cannot be explained or recreated using machine-learning or deep-learning algorithms; no matter how large the data set, the software will not understand or acquire uniquely human traits like empathy or sensitivity. To do so, it would have to be programmed with in-built models that describe what these concepts represent, in terms the program can work with.
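
To see what ‘programming a concept’ demands in practice, consider a deliberately crude sketch. Everything below is invented for illustration: to put even a stand-in for empathy into code, we are forced to reduce it to explicit, hand-written rules, and the result plainly is not empathy.

```python
# A deliberately crude, invented illustration of what formalising a concept
# requires. Every rule below is a hand-written assumption, not a model of
# real empathy; the gap between this and the genuine trait is the point.

EMPATHY_RULES = {
    "my dog died": "I'm so sorry for your loss.",
    "i got the job": "Congratulations, that's wonderful news!",
}

def respond(message: str) -> str:
    """Return a canned 'empathic' reply if a known phrase appears."""
    lowered = message.lower()
    for phrase, reply in EMPATHY_RULES.items():
        if phrase in lowered:
            return reply
    return "I see."  # everything outside the rules falls through

print(respond("My dog died yesterday"))  # scripted sympathy, not feeling
```

However many rules we add, the program only ever matches strings to responses; the concept itself never enters the code.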

The development of better knowledge structures is one area where AI researchers are improving efficiency, complementing increasingly massive data sets. However, scientists are a long way from being able to render intangible, emotionally complex phenomena in formal terms.

The potential capacity of modern computing systems can be summed up with the (perhaps simplistic) adage: “If you can understand a task, you can program it”. On one hand, this suggests a vast range of applications derived from, and inspired by, the entire breadth of human understanding; in essence, virtually everything knowable and univocal can be formalised mathematically and programmed. On the other hand, it naturally limits our explorations to unambiguous concepts rooted in the material world, rather than the realms of metaphysics and philosophy.

All views are my own and not shared by Oracle. Please feel free to connect with me on LinkedIn.
