A young Frank Rosenblatt is at the peak of his career as a psychologist: he has built an artificial brain that can learn skills, the first in history, and even The New York Times has covered his story. Then a friend from his childhood publishes a book criticizing his work, unleashing an intellectual war that paralyzed AI research for years.

This friend was Marvin Minsky, who had known Rosenblatt since adolescence, and his book became the perfect excuse for the supporters of symbolic AI to spread the idea that neural networks didn’t work¹.
Many engineers and scientists think they should not worry about the politics or social events around them because those have nothing to do with science. We’ll learn how conflicts of interest, politics, and money left the AI field without hope for a long stretch of the last century, inevitably starting what became known as the AI Winter.
This is not a story about mathematics, AI, or science. This is a story about greed, ignorance, and the triumph of human curiosity.
This is the story behind the AI Wars.
The Neuron
An 11-year-old Spanish boy, already known for his rebellious behavior, builds a homemade cannon and fires it at his neighbor’s door, destroying it, for which he is arrested and jailed.
The same child, Santiago Ramón y Cajal, received a Nobel Prize 40 years later.
He was the son of a surgeon, a profession that kept the family constantly on the move. Santiago was a very good painter and gymnast, but his father never encouraged those abilities, even though they would contribute to his success later in life. He studied medicine in Zaragoza and, after finishing his degree, was recruited by the Spanish army and sent to Cuba, where he contracted malaria and tuberculosis. After recovering back in Spain and spending some years working as a professor, he began using a new method to observe brain tissue, from which he made extensive, detailed drawings of the major regions of the brain.
Before the 1900s, scientists believed the brain was a single continuous network, without any gaps in between.
Santiago used the new technique to demonstrate that nerve cells are not continuous with one another. The German anatomist Heinrich Waldeyer learned Spanish to study Santiago’s discoveries and summarized the observations in what he called the neuron theory (the very concept of a "neuron" did not exist before then).

"The ability of neurons to grow in an adult and their power to create new connections can explain learning." (Santiago Ramón y Cajal, 1894)
Because of that last quote, Santiago is known as the first neuroscientist in history. These findings changed our understanding of the brain forever.

Thanks to this new knowledge, the psychologist Frank Rosenblatt was able, decades later, to replicate neurons artificially in what he called the Perceptron.

The First Artificial Neuron
One afternoon in 1935, a boy named Walter Pitts, being chased by bullies, ducked into the local library to hide. The library was his shelter from the fierce outside world.
Pitts stayed in the library for three days, not only because of the bullies but also because his attention was captured by *Principia Mathematica*, a book that attempted to reduce all of mathematics to pure logic. Pitts sat down and began to read its almost 2,000 pages. During those days he found some errors and wrote a letter to Bertrand Russell, one of its authors, who was amazed to learn that Walter was only 12 years old².
At age 15 he ran away from home, and from that time on he refused to speak of his family. He then began the journey of his life, attending lectures by several mathematicians at the University of Chicago. There he met the physicist Nicolas Rashevsky, the founder of mathematical biophysics, and was intrigued by his work.
These new ideas inspired Walter to try to replicate the functions of the brain with a computer. A few years later he published a paper with the man who had helped him get off the streets (he had been homeless for several years), Warren McCulloch. In it they proposed the first mathematical model of a neural network. This model, a simple formalized neuron, is still the standard of reference in the field of neural networks. When they presented this work, Pitts was only 20 years old and the Second World War was raging. Both scientists were inspired by Alan Turing, the British mathematician considered the father of computer science, and used his recently published concept of the Turing machine to model the brain.


The Perceptron
As Santiago Ramón y Cajal said, humans learn when our neurons create new connections. If a dog bites us, the neurons responsible for recognizing the dog and the ones that register the pain fire at the same time and create connections between them. As we collect more experiences with dogs that don’t harm us, the connection between the two groups of neurons weakens, and we stop associating pain with dogs³.
Frank Rosenblatt, the protagonist of this story, was born in the US in 1928. He studied psychology, but his research interests were broad, ranging from neurobiology to computer science. This mix of fields allowed him to create his most famous artifact: the perceptron, an electronic device constructed in accordance with biological principles that showed an ability to learn.
Research by many scientists on how the brain works, together with the arrival of computers, was converging toward an artificial brain with the ability to learn. Rosenblatt designed the Perceptron, a mathematical structure that simulates how neurons learn.
The perceptron is a structure with three elements: neurons, links, and a parameter called a weight, which simulates the strength of the connection between neurons. It is a type of artificial neural network.
Every neuron stores a number, and that number is a "signal" to other neurons. In the next animation, you can see how a change in the first neuron affects the next neuron when the connection is strong (a large weight) and won’t affect it when the connection is weak. Much like the human brain.

The complete structure of a perceptron is composed of three layers and the weights:

Input layer (blue) – The first layer of the network. It takes the input values and passes them on to the next layer.
Hidden layer (grey) – A group of neurons that help process the data.
Output layer (green) – The layer from which we read the result of the operation.
In general, the perceptron is just a function with inputs and outputs, and the weights are just the internal workings of that function.
This network can be trained much the way a baby is trained: we feed it many examples and watch the results in the output layer, and every time we get a wrong answer, an algorithm changes the values of the weights.
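To make this concrete, here is a minimal sketch in Python (my own illustrative example, not Rosenblatt’s hardware or code): a single artificial neuron trained with the classic perceptron learning rule. The AND dataset, the number of epochs, and the learning rate are assumptions chosen just for the demonstration.

```python
def perceptron_train(examples, epochs=20, learning_rate=0.1):
    """Train one artificial neuron with the classic perceptron learning rule."""
    n_inputs = len(examples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, expected in examples:
            # The neuron's "signal": a weighted sum of inputs passed through a threshold.
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            # Every wrong answer nudges the weights toward the correct answer.
            error = expected - output
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy training set: the logical AND function (linearly separable, so one neuron can learn it).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = perceptron_train(examples)
for inputs, expected in examples:
    prediction = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", prediction, "expected", expected)
```

That small correction, applied over and over, is the entire "learning" in this model.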
For a more detailed explanation, you can read my article *A Funny and Super Easy Introduction to Artificial Intelligence and Machine Learning*.

Male vs Female experiment
Rosenblatt led the design of a computer to implement this idea and tried to train it to recognize the differences between males and females in photos.
"the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
The New York Times report on the Perceptron.
Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognize many classes of patterns. The system couldn’t understand the difference between males and females.
This is when the collapse of AI started.
The Book
What the researchers didn’t know while working on this problem is something the AI community discovered later: to recognize complex patterns we need extra, hidden layers of neurons, and this is the key concept behind what we know today as Deep Learning.
The book Perceptrons, published in 1969 by Marvin Minsky and Seymour Papert, presented mathematical proofs that acknowledged some of the perceptron’s strengths while also showing major limitations. The most important one concerned one of the simplest operations a CPU routinely performs: the XOR function.
In simple terms, XOR is a logical function that returns true if exactly one of its two inputs is true; more generally, it is a logic gate that returns true if the number of true inputs is odd. What the book demonstrated is that a single-layer perceptron (one with no hidden layer) cannot compute the XOR function, which also implies that it cannot reproduce many more complex functions. Processing those functions simply required more layers, but the community ignored this fact.
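To see the limitation concretely, here is a small illustrative sketch (my own example, not taken from the book): a single threshold neuron can compute OR, but no choice of two weights and a bias makes it compute XOR, while a tiny two-layer network with hand-picked weights does it easily.

```python
def neuron(inputs, weights, bias):
    """One threshold neuron: fires (returns 1) if the weighted sum of inputs exceeds 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]

# OR is linearly separable, so a single neuron handles it.
print([neuron(c, (1, 1), -0.5) for c in cases])  # [0, 1, 1, 1]

# XOR is not linearly separable: no single neuron works, but two layers do.
def xor_network(x):
    h_or = neuron(x, (1, 1), -0.5)    # hidden neuron 1: "at least one input is on"
    h_and = neuron(x, (1, 1), -1.5)   # hidden neuron 2: "both inputs are on"
    return neuron((h_or, h_and), (1, -2), -0.5)  # output: "one input on, but not both"

print([xor_network(c) for c in cases])  # [0, 1, 1, 0]
```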

Rosenblatt and Minsky became central figures in a debate inside the AI research community, and are known to have had loud arguments at conferences, yet they remained friendly¹.
The conclusions of the book were wrongly interpreted as showing that further progress in neural networks was not possible and that this approach to AI had to be abandoned.
Marvin Minsky remained skeptical his whole life; even in his final years he didn’t believe in the advances of AI, and he made many poor predictions despite being an expert in the field⁶.

The AI Wars
Hype is common in emerging technologies: think of the railway mania, the dot-com bubble, and, more recently, the rise of Bitcoin.
Throughout its history, AI research has experienced several hype cycles, each followed by disappointment and criticism, then funding cuts, then renewed interest years or decades later. The term "AI Winter" was coined by analogy to the idea of a nuclear winter.
While the perceptron was being studied, new approaches, including symbolic AI, emerged. The core problem that unleashed this "AI War" was that the different groups found themselves competing for funding and people, and their demand for computing power far outpaced the available supply⁴.
Symbolic AI is easier to understand and its results are easily explainable, unlike those of neural networks. To work with symbolic AI, we describe the universe of a problem to the computer, specifying its objects and rules, and the computer then draws conclusions based on those rules. The scientists who favored this approach were, in general, against the use of neural networks.
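As a toy illustration (a made-up example, not how any historical symbolic system actually worked), the idea can be sketched as a handful of hand-written facts and if-then rules that the program chains together until nothing new can be derived:

```python
# Hand-written knowledge: facts we already know and if-then rules.
facts = {"has_fur", "barks"}
rules = [
    ({"has_fur"}, "is_mammal"),
    ({"barks", "is_mammal"}, "is_dog"),
]

# Forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fur', 'barks', 'is_mammal', 'is_dog'}
```

Every conclusion can be traced back to an explicit rule, which is why these systems are so easy to explain, and also why they struggle when the rules of a problem cannot be written down by hand.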
Years later, researchers found neural networks to be more useful for problems involving uncertainty, for example in formulating predictions⁵. But after the publication of the book "Perceptrons", research funding sided with symbolic AI.
In 1973, Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research. His report, now called the Lighthill Report, criticized the failure of AI to achieve its "grandiose objectives" and concluded that nothing being done in AI could not be done just as well in other sciences.
The report led to the complete dismantling of AI research in England for the next 10 years.

Frank Rosenblatt died in July 1971 on his 43rd birthday, in a boating accident in Chesapeake Bay.

The Resurgence
Although the Defense Advanced Research Projects Agency (DARPA) no longer believed in the possibilities of AI, a new project called the Dynamic Analysis and Replanning Tool (DART) changed the fate of the research. This tool, used by the U.S. military to optimize and schedule the transportation of supplies and personnel, proved so successful that within four years it had paid back more than all the funds DARPA had channeled into AI research over the previous 30 years⁷. Thanks to DART, Operation Desert Shield/Storm in 1990–91 was the largest, fastest, and farthest sealift to a single locale in the history of warfare.
Although DART was not the kind of thing we would now call AI, it was regarded as an AI program at the time and helped keep the flame of research alive.

Even in the mid-2000s, researchers in AI deliberately called their work by other names: informatics, machine learning, analytics, and so on. The field’s reputation had been damaged.
In the following years there were other, smaller AI winters driven by the same cycle of hype and disappointment, but in the end human curiosity followed the path Rosenblatt had suggested and found a holy grail in neural networks. Small changes added up into a creeping normality, and now AI is everywhere: in our phones, in the bus you take to commute, and in those endless recommendations Instagram serves you.
But did symbolic AI survive after all?
Symbolic AI cannot, for example, predict the price of gold next month. A neural network can, but it can’t explain the process in between; it is a black box. The next big step in AI may be the creation of hybrids that merge the advantages of both models⁸.
The AI Spring and the next war
The AI winter is considered to be over thanks to the great success of solutions powered by machine learning. Google Translate, AlphaGo, Watson, and GPT-3 are some of the rockstars driving AI advances these days. Thanks to these technologies, we have been living in an AI Spring since around 2010.
But not only could another AI Winter arrive because of the high expectations these companies are creating; we could also see another AI War between the two projects competing fiercely to own the latest AI trend: DeepMind (Google) and OpenAI (founded by Elon Musk and others)⁹.
And while researchers focus on building the best network models to solve real-life problems, discussions on AI ethics are getting harder. A few months ago, Google fired a computer scientist who warned about the racist and sexist biases of current AI models¹⁰.
In the end, we are human: naturally competitive and prone to conflict, and the only things that can save science are our curiosity and persistence.

References:
[1] A Sociological Study of the Official History of the Perceptrons Controversy (1996). https://journals.sagepub.com/doi/10.1177/030631296026003005
[2] The Man Who Tried to Redeem the World with Logic (2015). https://nautil.us/issue/21/information/the-man-who-tried-to-redeem-the-world-with-logic
[3] Neurotic Neurons. https://ncase.me/neurons/
[4] Computational Power and the Social Impact of Artificial Intelligence (2018). https://arxiv.org/abs/1803.08971v1
[5] Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards a Resolution of the Dichotomy (1995). https://link.springer.com/chapter/10.1007%2F978-0-585-29599-2_11
[6] How Accurate Was Marvin Minsky in His AI Predictions? (2020). https://www.brightworkresearch.com/how-accurate-was-marvin-minsky-in-his-ai-predictions/
[7] DART: revolutionizing logistics planning (2002). https://ieeexplore.ieee.org/document/1005635
[8] AI’s next big leap (2020). https://knowablemagazine.org/article/technology/2020/what-is-neurosymbolic-ai
[9] Has OpenAI Surpassed DeepMind? (2020). https://analyticsindiamag.com/has-openai-surpassed-deepmind/
[10] We read the paper that forced Timnit Gebru out of Google. Here’s what it says (2020). https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
[11] https://tripleampersand.org/kernelled-connections-perceptron-diagram/
[14] http://dbe.rah.es/biografias/10967/santiago-ramon-y-cajal
[15] https://es.wikipedia.org/wiki/Walter_Pitts#/media/Archivo:Lettvin_Pitts.jpg
[16] https://en.wikipedia.org/wiki/Marvin_Minsky#/media/File:Marvin_Minsky_at_OLPCb.jpg
[17] https://es.wikipedia.org/wiki/Seymour_Papert#/media/Archivo:Papert.jpg
[18] https://es.wikipedia.org/wiki/James_Lighthill#/media/Archivo:James_Lighthill.jpg