The mathematician, philosopher, and number-cult leader Pythagoras

Are Our Thoughts Really Dot Products?

How AI Research Revived Pythagoreanism

Thomas Nield
Towards Data Science
9 min read · Jan 8, 2019


Recently, I wrote an article about how deep learning might be hitting its limitations and posed the possibility of another AI winter. I closed that article with a question about whether AI’s limitations are defined just as much by philosophy as by science. This article is a continuation of that topic.

The reason I wrote this article is to spur a discussion on why, despite so many AI winters and failures, people are still sinking costs into pursuing artificial general intelligence. I am presenting a very high-level, non-technical argument that belief systems, and not just scientific research, may be driving people’s adamance about what is possible.

This article is not meant to be an academic paper that meticulously (and boringly) addresses every technicality and definition in the gap between philosophy and science. Rather, it is a set of light-hearted musings that make some “common sense” observations. While we could nitpick definitions and semantics all day, or debate whether replicated behavior equals intelligence, let’s put all that aside. So please don’t take yourself too seriously while reading this.

A Brief History of Pythagoras

2,500 or so years ago, there was a philosopher and mathematician in southern Italy named Pythagoras. You may have heard of him, but the story behind the man who studied triangles and math theorems is much wilder than you probably think.

Pythagoras ran a number-worshipping cult, and his followers were called the mathematikoi. Pythagoras told his followers to pray to numbers, particularly sacred ones like 1, 7, 8, and 10. After all, “1” is the building block of the entire universe. For some reason, the number “10” (represented by the Tetractys) was the most holy. It was so holy, in fact, that they made sacrifices to it every time a theorem was discovered. “Bless us, divine number!” they prayed to the number 10. “Thou who generated gods and men!”

According to Pythagoras, the universe cannot exist without numbers, and therefore numbers hold the meaning of life and existence. More specifically, the idea that rational numbers built the universe was sacred and unquestionable. Apart from enabling volume, space, and everything physical, rational numbers also enabled art and beauty, especially in music. So fervent was this sacred belief that, legend says, Pythagoras drowned a man for proving irrational numbers existed.

Are Our Thoughts Really Dot Products?

Multiplying matrices summons demons, as alluded to by Elon Musk

Fast forward to today. It may not be obvious to most people, but “artificial intelligence” is nothing more than some math formulas cleverly put together. Many researchers hope to use such formulas to replicate human intelligence on a machine. Now you may defend this idea and say “Cannot a math formula define intelligence, thoughts, behaviors, and emotions?” See what you just did there? No fava beans for you.

Notice that even though we barely have an idea how the brain works, especially when it comes to intelligence and consciousness, even the most educated people (scientists, journalists, etc.) are quick to suggest an idea without evidence. Perhaps you find mathematics so convincing as a way to explain the world’s phenomena that you are almost certain emotions and intelligence can be modeled mathematically too. Is this not the natural human tendency to react to the unknown with a philosophy or worldview? Perhaps this is the very nature of hypotheses and theories. Before you know it, we sell the neural network model (loosely inspired by neurons in the brain) as a carbon copy of biology.

But again, you don’t know if this is true.

Is every thought, feeling, or even behavior we have really a bunch of numbers being multiplied and added in linear algebra fashion? Are our brains, in fact, simply a neural network doing dot products all day? Reducing our consciousness to a matrix of numbers (or any mathematical model) is certainly Pythagorean. If everything is numbers, then so is our consciousness. Perhaps this is why so many scientists believe general artificial intelligence is possible, as being human is no different than being a computer. It may also be why people are quick to anthropomorphize chess algorithms.
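To ground the phrase “dot products” in something concrete, here is a minimal sketch (in Python with NumPy, using made-up numbers) of what a single neural network layer actually computes: inputs multiplied by weights, summed, and passed through a nonlinearity. Whether an operation like this has anything to do with a thought is exactly the question at hand.

```python
import numpy as np

# A single neural network layer: weights dot inputs, plus a bias, then a
# nonlinearity. The numbers below are arbitrary and purely for illustration.
x = np.array([0.2, -1.0, 0.5])              # "input signals"
W = np.array([[0.1, 0.4, -0.3],
              [0.8, -0.2, 0.6]])            # learned weights (2 neurons, 3 inputs)
b = np.array([0.05, -0.1])                  # biases

z = W.dot(x) + b                            # the dot products in question
activation = np.maximum(0, z)               # ReLU: keep only positive signals

print(activation)                           # [0.   0.56]
```

Stack enough of these layers and adjust the weights against data, and you have “deep learning” — but at every step, it is still multiplication and addition.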

21st Century Pythagoreanism

For this reason I believe Pythagoreanism is alive and well, and the sensationalism of AI research is rooted in it. You might say, “Well, I get that Pythagorean philosophy says ‘everything is numbers,’ and by definition that includes our thoughts and behaviors. And sure, maybe AI research unknowingly clings to this philosophy. But what about number worship? Are you really going to suggest that happens today?”

Hold my beer.

In Silicon Valley, a former Google/Uber executive started an AI-worshipping church called Way of the Future. According to documents filed with the IRS, the religious nonprofit states its mission is “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” You might justifiably say this community exists on the extremes of society, but we cannot dismiss the high-profile people and companies involved, or how the church seeks to entrench itself in the scientific community. Here are some excerpts from its mission statement:

Way of the Future (WOTF) is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + “machines”. Given that technology will “relatively soon” be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition. Help us spread the word that progress shouldn’t be feared (or even worse locked up/caged).

Alright, never mind the fact that sensationalism about near-term AI capabilities was alive and kicking in the 1960s. But let’s keep reading:

We believe that intelligence is not rooted in biology. While biology has evolved one type of intelligence, there is nothing inherently specific about biology that causes intelligence. Eventually, we will be able to recreate it without using biology and its limitations. From there we will be able to scale it to beyond what we can do using (our) biological limits (such as computing frequency, slowness and accuracy of data copy and communication, etc).

Okay, for all this talk about science and objectivity… there is so much Pythagorean philosophy filling in the gaps. The belief that intelligence is not biological but rather mathematical (because that is what AI is) is hardly proven, and yet it labels itself as hard science, just as Pythagoras claimed his beliefs were. And how can the unproven claim that “intelligence is not rooted in biology” stand up to the fact that intelligence (even by a layman’s definition) has only ever existed in biology? I am not refuting this claim, but darn it, we have had a hard and expensive time trying to prove it over the past 60 years, with no success. At this point, shouldn’t we be entertaining opposing theories a little more?

I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.

— Claude Shannon

Regardless, let’s just assume this group is not reflective of the general AI community. (How many of you are going to church to worship an AI overlord anyway?) There are still a lot of journalists, researchers, and members of the general public who may not share these sentiments in a religious sense, but who are still influenced by them. Many people worry robots will take their blue- and white-collar jobs, or worse, stage a Skynet-like takeover of society. Other folks worry we will become cyborgs in a figurative or literal sense and that AI will dehumanize humanity.

The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

— Edsger Dijkstra

Science fiction movies certainly have not helped keep imaginations tempered by reality. But still, Silicon Valley executives and researchers insist this can happen in the near future and continue to promote exaggerated claims about AI capabilities. They could simply be doing this as a publicity stunt to attract media attention and VC funding, but I think many sincerely believe it. Why?

With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

— Elon Musk

This sensationalism, fear, and even worship of artificial intelligence is 21st century Pythagoreanism. In layman’s terms, it is based entirely on the theory that intelligence, thoughts, and emotions are nothing more than mathematical models. If this theory indeed holds up to be true, then of course a neural network could replicate human intelligence. But is human intelligence really that simple to model? Or should we acknowledge that human intelligence is not understood well enough to make this possible?

Pythagoras Says Everything is Numbers. So What?

So, everything is numbers in the domain of artificial intelligence and in Pythagorean philosophy. Why does this matter?

I am not saying Pythagoreanism is wrong, but rather that much of the scientific community fails to acknowledge it is driven just as much by a philosophy as by science. One must be careful when making scientific claims without acknowledging one’s own worldview, because everyone lives by a philosophy whether they realize it or not. Philosophy forces us to reason about our existence, to consider how we react to the unknown, and to disclose our own biases.

Presuming to know how human intelligence works quickly crosses a fine line, and failing to make this distinction between philosophy and science is going to hurt the reputation of the scientific community. Before millions of dollars are invested and sunk into an AI startup, it might be a good idea to vet which claims about AI capability are merely philosophical versus established. Time and time again, ambitious AI research has had a poor track record when it comes to credibility and delivering what it says is possible. I think a lack of philosophical disclosure is largely responsible for this.

What If Everything Isn’t Numbers?

What if the world is not structured and harmonious but rather messy and chaotic, and we merely use math to loosely make sense of it? What if consciousness, intelligence, and emotions are not numbers and math functions? What if the human (and even animal) mind is infinitely more complex, in ways we cannot model? Humans and animals are, after all, irrational and chaotic.

If we do not have discussions about these questions (and have them publicly), we are kidding ourselves when we make assertions about the unknown. And we should not entertain and accept just one philosophy. We should be able to discuss them all.


If you do not buy into this philosophy of 21st century Pythagoreanism, then the best you can strive for is to have AI “simulate” actions that give the illusion it has sentiments and thoughts. A translation program does not understand Chinese. It “simulates” the illusion of understanding Chinese by finding probabilistic patterns. The algorithm does not understand what it is doing, know that it is doing it, or, much less, why it is doing it. In a non-Pythagorean world, AI is like a magician passing off tricks as magic.
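As a toy illustration of that point, consider the sketch below. The phrase table and probabilities are invented for the example, and real translation systems are vastly more sophisticated, but the principle stands: the “translator” only looks up the most probable target phrase for patterns it has seen, with no representation of meaning anywhere.

```python
# Toy "translator": pure pattern lookup, no comprehension. The phrase table
# and probabilities are made up for illustration.
toy_phrase_table = {
    "你好": [("hello", 0.9), ("hi there", 0.1)],
    "谢谢": [("thank you", 0.8), ("thanks", 0.2)],
}

def translate(phrase: str) -> str:
    """Return the highest-probability English phrase seen for this input."""
    candidates = toy_phrase_table.get(phrase)
    if not candidates:
        return "<unknown>"
    best_phrase, _probability = max(candidates, key=lambda pair: pair[1])
    return best_phrase

print(translate("你好"))  # "hello" -- a statistical association, not understanding
```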

What if we got it all wrong, and it is biology that will always be superior in intelligence while technology is what is limited? We are trying to emulate biology after all, and with great frustration.

Here’s what I believe, and I am not a Luddite saying we shouldn’t try to make “smarter” machines: we should set out to achieve small, reasonable goals focused on diverse and specific sets of problems… with equally diverse and specific solutions. In other words, we can accept that it is okay to create algorithms that are great at one task, rather than spin our wheels creating an algorithm that does everything (“jack of all trades, master of none” and all that). If not to prevent AI winters, sunk investments, and wasted careers, let’s do it to make AI research more pragmatic and less sensationalized. We have seen enough AI winters to know better by now.

EDIT:

I have heard some feedback on this article, much of it with compelling arguments suggesting I should clarify a few things.

I will admit that suggesting only "dot products" as a way to model thoughts is a little reductionist. Really, it could be any mathematical model, invented or yet to be discovered.

That being said, this article is not meant to be argumentative but rather rhetorical: it is meant to spark discussion and to comment on the parallels between Pythagorean philosophy and AI sensationalism. There are many philosophical discussions on AI; I just find it problematic that they are not held in the open with the public and the media often enough. I am also surprised nobody has ever compared AI sensationalism to Pythagoreanism. If you find this comparison wrong, then please say why.

It is anyone’s prerogative to put resources and costs into research that may become a dead end in the name of science, and I wish them success. However, the AI winters and failures of past decades do raise the question of whether we are doing it all again and not learning from the past. The Pythagorean perception is simply one explanation for why strong AI is marketed as a near-term possibility, even though we have no models that remotely come close yet. If intelligence is indeed just numbers and functions, then it is just a matter of time before we find the right model.


Author of Essential Math for Data Science (O’Reilly) and Getting Started with SQL (O’Reilly)