Towards Neuroscience-Grounded Artificial Intelligence

Why There Will Be No Human-Level Artificial Intelligence Before Understanding Biological Intelligence First.

Vincenzo Lomonaco
Towards Data Science

--

In the last decade we have witnessed a renewed interest in Artificial Intelligence and revived hopes for its future development. This new wave of optimism is apparent not only in the public debate and the commercial hype but also within the research community itself, where in a recent survey more than 90% of AI scientists said that Human-Level AI will be reached by 2075.

However, it still seems rather unclear how we are going to get there. Notably, most AI scientists do not think that “copying the brain” would be a good strategy in the pursuit of Human-Level AI or Artificial General Intelligence, since we might end up copying its biological constraints as well.

Nevertheless, in this brief blog post I’m going to point out a few ideas on why I think there will be no Human-Level Artificial Intelligence before we understand biological intelligence first.

Artificial Intelligence Research: a Whack-A-Mole Game

Despite the remarkable progress in Artificial Intelligence, thanks to recent advances in machine learning research, a comprehensive and cohesive approach to its development still seems a distant goal. Research in AI is nowadays often carried out in small research labs working on bottom-up, incremental (and partial) solutions to the problem, solutions that will rarely put us on the path towards Human-Level AI systems.

More worryingly, “AI” algorithms and techniques are often focused on very narrow aspects of intelligence taken in isolation. On the one hand, studying sub-components in isolation may be important to understand them better and disentangle them from external factors; on the other hand, this approach may fall short if what we want to understand is rather the interaction of these sub-components, which gives rise to the emergence of complex intelligent behaviors.

The first approach, very common in AI research today, often translates into a sort of “whack-a-mole” game, where an over-engineered solution to a narrow task turns out to be ineffective or even self-defeating when used in more complex settings where multiple dimensions of the problem are considered at the same time.

To quote F. Chollet from his recent paper “On the Measure of Intelligence”:

“[…] optimizing for a single metric or set of metrics often leads to trade-offs and shortcuts when it comes to everything that isn’t being measured and optimized for […]. In the case of AI, the focus on achieving task-specific performance while placing no conditions on how the system arrives at this performance has led to systems that, despite performing the target tasks well, largely do not feature the sort of human intelligence that the field of AI set out to build.” — F. Chollet, “On the Measure of Intelligence”, 2019

As an example, consider the different key computational principles we may want our intelligent systems to be endowed with (e.g. Continual Learning, Representation Learning, Sequence Learning, Compositionality, Sparse Distributed Representations and Computation, etc.). All these principles focus on specific aspects of intelligence that may all be equally important for developing truly intelligent machines. However, they are rarely studied together, and the interactions (or conflicts) of their algorithmic solutions are often unknown.

Of course, if we instead consider multiple dimensions at the same time, the search space for a comprehensive AI solution rapidly explodes. For example, let us consider a simplistic scenario where we have identified 10 properties to fulfill to get to Human-Level AI. Let us also assume that there exist 10 possible solutions for each of them (if taken in isolation), and suppose that there also exists a unique comprehensive solution naively combining one solution for each dimension. The total search space would still be 10¹⁰ = 10,000,000,000 possible algorithmic solutions.
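
Just to make this back-of-the-envelope arithmetic concrete, here is a tiny illustrative snippet; the numbers of properties and of candidate solutions per property are simply the simplistic assumptions made above, nothing more:

```python
# Toy illustration of the combinatorial explosion: how many naive
# combinations exist if each property admits a fixed number of
# candidate solutions. Purely a back-of-the-envelope sketch.

def search_space_size(num_properties: int, solutions_per_property: int) -> int:
    """One candidate solution per property, naively combined."""
    return solutions_per_property ** num_properties

for k in range(1, 11):
    print(f"{k:2d} properties -> {search_space_size(k, 10):,} combinations")

# With 10 properties and 10 solutions each: 10,000,000,000 combinations,
# far too many to evaluate exhaustively.
```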

While, as a research community, we have undoubtedly made some progress in the last 60 years of this field, are we sure we can find this solution without any help?

Figure 2: The number of possible AI solutions, given a set of properties to fulfill (x, y, z), grows from 5 to 125 as the number of properties increases from 1 to 3, following an exponential trend.

The Frankenstein Approach

A common view among many AI researchers is that maybe we do not need to search this huge space of possibilities in the quest for the “master algorithm” after all. We may just compartmentalize, maintaining in isolation different solutions that are great at solving different problems, and only later patch them together into a comprehensive system able to balance them out and exploit the functionalities offered by these sub-systems depending on the circumstances.

I call this approach the “Frankenstein Approach”, as I don’t think just patching things together can constitute a scalable solution leading to complex and emergent intelligent behaviors (even though it may prove useful for some practical applications). On the contrary, I believe the most interesting approach to AI research is to understand how key computational principles synergistically, efficiently and effectively play out together to enable the stunning emergent properties we can observe in biological intelligent systems.

Figure 3: “Young Frankenstein” movie parody, 1974. Igor chooses the wrong brain to transplant, ending up with a demented Frankenstein.

Gary Marcus is one of the most prominent defenders of this view, with his proposals for “Hybrid AI Systems” putting together connectionist, symbolic and probabilistic approaches:

The most powerful A.I. systems … use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning. — Gary Marcus, 2012

While Yoshua Bengio is more in line with my personal view:

What I bet is that a simple hybrid in which the output of the deep net are discretized and then passed to a GOFAI symbolic processing system will not work. Why? Many reasons: (1) you need learning in the system 2 component as well as in the system 1 part, (2) you need to represent uncertainty there as well (3) brute-force search (the main inference tool of symbol-processing systems) does not scale, instead humans use unconscious (system 1) processing to guide the search involved in reasoning, so system 1 and system 2 are very tightly integrated and (4) your brain is a neural net all the way. — Yoshua Bengio, 2019

Neuroscience-Grounded AI

One possible solution to the problem would be to get a bit of help from nature in guiding our titanic search towards Human-Level AI. This argument has been used by many, most notably by Demis Hassabis in his quest for Neuroscience-Inspired Artificial Intelligence. At the same time, however, other AI researchers, including the Turing Award winner Yann LeCun, warn us that looking into biology may also be counterproductive. This position is often illustrated with a parallel to the history of flying machines.

The majestic Avion III, which I had the pleasure of seeing in person at the Musée des Arts et Métiers in Paris, was a primitive steam-powered aircraft built by Clément Ader between 1892 and 1897. Its design was heavily inspired by nature (bats in particular), with articulated wings; however, the vehicle was unable to fly, and research on it was stopped by the French government in 1898. The first successful flying machine was built by the Wright brothers instead, in 1903.

Figure 4: Y. LeCun slide, from a talk in 2016.

The “Wright Flyer” design was not created by taking inspiration from birds, but rather based on the study of the principles of aerodynamics. Hence, this example is often cited as a compelling argument not to copy biology but merely take inspiration from it: flapping wings, feathers and so on turned out not to be that important for flying efficiently.

However, I believe this argument has been largely misunderstood by the AI research community, so that anything that departs from mainstream AI research and takes even slightly more inspiration from biology is often labeled as “doomed to fail” if it cannot reach the right percentage of accuracy on a couple of standard benchmarks.

The Wright brothers themselves spent a great deal of time observing birds in flight. In fact, while it may be true that biology poses constraints that are not specifically related to Intelligence, it is only through the careful study of biological systems that we can find the right level of abstraction and identify the key principles of Intelligence, disentangling them from “implementation” details that may exist for other evolutionary reasons.

Let’s be honest here: most machine learning researchers (including myself) don’t have the knowledge, time and energy to look into other disciplines and really understand what’s going on there. When they say they are “taking inspiration from the brain”, it is at an incredibly high level of abstraction (think of memory replay, for example). Moreover, they are often unwilling to lose their edge on real-world, practical applications, and won’t take the risk of looking into biology at all.

A Neuroscience-Grounded framework for Intelligence

At the same time, it is still unclear to me how we can expect to create truly intelligent machines without a clear understanding of what Intelligence is in the first place. How did we end up calling even a simple algorithm for recognizing handwritten digits “AI”?

I believe the focus of AI research shouldn’t be incremental improvements on narrow applications, but rather the development of a computational framework of intelligence that could explain the emergence of intelligent behaviors in humans and machines.

In this blog post, I use the term “Neuroscience-Grounded AI” because I think we should develop this framework of intelligence neither by taking loose inspiration from biology nor by being constrained by its implementation details, but rather by developing a theory of intelligence that is strongly guided by and grounded in discoveries from theoretical and computational neuroscience.

Practically speaking, this would mean looping over the following steps (a toy sketch of such a loop follows the list):

  1. Identify key computational principles, disentangling them from implementation details.
  2. Understand their interactions and emerging properties.
  3. Validate the framework in silico (in simulations as well as practical applications) and against neuroscience data.
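
Here is a minimal, purely illustrative sketch of what such a loop might look like in code. Everything in it (the list of principles, both scoring functions) is a hypothetical placeholder I am introducing only to make the idea concrete; it is not an implementation of any existing framework.

```python
from itertools import combinations
import random

# Purely illustrative toy: the principles and both scoring functions
# are hypothetical placeholders, not a real AI/neuroscience pipeline.

PRINCIPLES = ["continual_learning", "representation_learning",
              "sequence_learning", "compositionality", "sparsity"]

def task_score(subset):
    # Stand-in for validation in simulations / practical applications.
    return random.random() * len(subset)

def neuro_consistency(subset):
    # Stand-in for validation against neuroscience data
    # (here arbitrarily rewarding two of the toy principles).
    return sum(1.0 for p in subset if p in {"continual_learning", "sparsity"})

best_subset, best_score = None, float("-inf")
for k in range(1, len(PRINCIPLES) + 1):
    for subset in combinations(PRINCIPLES, k):  # step 1: pick candidate principles
        # step 2 (modeling their interactions) is not modeled in this toy;
        # the score below only mimics step 3 (joint validation).
        score = task_score(subset) + neuro_consistency(subset)
        if score > best_score:
            best_subset, best_score = subset, score

print("Best combination under this toy scoring:", best_subset)
```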

There is no point in developing and focusing on over-engineered narrow AIs that do not include all the pre-identified principles of intelligence (the whack-a-mole, over-engineering risk) if the objective is to reach Human-Level AI.

On the convergence of AI and Neuroscience

I believe that in the next decade, thanks to significant progress in brain imaging, brain-computer interfaces, neuroscience discoveries, etc., it will become clear that the fastest path to Human-Level AI is through the understanding of biological intelligence first.

What many AI researchers often fail to recognize is that we are already slowly moving in that direction (see Figure 5 below). Neural networks, for example, are slowly integrating interesting features and computational properties that have always been recognized as important in Neuroscience. Take “Continual Learning”: it is a very recent trend in neural network research (with growing interest since 2016), but it has long been identified as a key computational property at the very basis of biological learning and synaptic plasticity.
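
To make the connection tangible, here is a minimal, hedged sketch of the kind of mechanism many continual learning methods build on: a rehearsal buffer loosely inspired by memory replay in the brain. It illustrates only the general idea, not any specific published method.

```python
import random

# Minimal, purely illustrative rehearsal buffer for continual learning.
# Loosely inspired by "memory replay"; not a specific published method.

class ReplayBuffer:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.samples = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling keeps a uniform random subset of the stream.
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.samples[idx] = sample

    def replay(self, batch_size=8):
        # Old samples are mixed into new training batches
        # to mitigate catastrophic forgetting.
        return random.sample(self.samples, min(batch_size, len(self.samples)))

# Usage sketch on a toy data stream of integers.
buffer = ReplayBuffer(capacity=100)
for new_sample in range(1000):
    buffer.add(new_sample)
    batch = [new_sample] + buffer.replay()
    # train_step(model, batch)  # hypothetical training call, omitted here
```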

The problem is that we move back and forth along the neuroscience-grounding axis just to favor the practical applications of the moment, but in the long run we are only slowing down our path towards Human-Level AI.

Figure 5: Progress in AI can already be seen as becoming more and more neuroscience-grounded over time.

While we may find smarter solutions than evolution for specific, narrow problems (e.g. playing chess or Go), the exponential complexity of a general intelligent system may just be too hard to grasp without any help from nature in guiding our research.

This acknowledgement will trigger a natural interplay between Artificial Intelligence and Neuroscience at an unprecedented level. Moreover, a shared framework for intelligence in this context will allow us not only to build smarter machines, but also to understand ourselves better and possibly expand our cognitive capabilities, narrowing the demarcation line between humans and machines.


If you’d like to see more posts on AI and Continual Learning, follow me on Medium or join ContinualAI.org: an open community of more than 600 researchers working together on this fascinating topic! (Join us on Slack today! 😄 🍻)
If you want to get in touch, visit my website vincenzolomonaco.com! 😃
