The power of astrocytes in memory processing ✨; SISSA researchers move towards biologically inspired Reinforcement Learning 🧬; Princeton 🎓 and the mathematics of memory manifolds in neural networks 👩‍🔬

Why should you care about Neuroscience?
Neuroscience is the root of today's artificial intelligence 🧠🤖. Reading about and keeping up with new insights in neuroscience will not only make you a better "Artificial Intelligence" person 😎, but also a finer creator of neural network architectures 👩‍💻!
This month, 3 amazing papers! The first one shows us the importance of astrocytes in the human brain and in artificial neural networks. Astrocytes are pivotal for human memory storage and processing, and this paper is a wake-up call for the whole data community to look more closely at biologically inspired neural nets. The second paper comes from SISSA neuro-researchers, who bring Behavioural Cloning rules to Reinforcement Learning problems, making this approach more brain-inspired. The final paper is a formidable piece of work from Princeton, where the authors develop a new gated Recurrent Neural Network (gRNN) that can store memories without fine parameter tuning, simply by pushing the mathematical foundations of gRNNs further. Enjoy! 🙂
Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network
Yuliya Tsybina, Innokentiy Kastalskiy, Mikhail Krivonosov, Alexey Zaikin, Victor Kazantsev, Alexander Gorban, Susanna Gordleeva, Paper
Can we model astrocytes in artificial neural networks? And what is the role of an astrocyte in the human brain? In this paper, the authors focus on a long-standing open problem: how the brain processes information and stores it as memory, finding a possible answer in astrocyte layers. Astrocytes belong to the family of glial cells. They usually have a star-like shape, and one of their main duties is to support neurons and modulate their synapses. As an example, a single human astrocyte can interact with up to 2 million synapses at a time 😱 In particular, astrocytes modulate synaptic transmission through calcium ions, resulting in firing-rate modulations. These modulations have been shown to be associated with working memory, revealing a key role for astrocytes in memory processing.
In this work, the authors extend a previous bio-inspired spiking neural network (SNN) model, adding astrocyte-mediated responses as changes in synaptic weights in order to store memories from input images. The SNN is made up of sparsely connected Izhikevich neurons. Each neuron is described by differential equations for its transmembrane potential, which depends on the input signal, the total synaptic current from all presynaptic neurons, and an astrocyte-induced modulation via calcium ions. Two layers of neurons are interconnected with the astrocytic layer, which is modelled by Ullah's model. Astrocytes communicate bidirectionally with ensembles of neurons, providing biological plausibility on one side and the loading, storage and retrieval of information on the other.
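To give a feel for the building block, here is a minimal sketch of a single Izhikevich neuron integrated with a plain Euler step. The `I_astro` term is a hypothetical placeholder for the astrocyte-induced current described above (not the paper's exact equations), and the parameter values are the standard regular-spiking ones.

```python
import numpy as np

def simulate_izhikevich(I_input, I_astro=0.0, T=1000.0, dt=0.5,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler integration of a single Izhikevich neuron (regular-spiking parameters)."""
    v, u = c, b * c                    # membrane potential (mV) and recovery variable
    spike_times = []
    for step in range(int(T / dt)):
        I = I_input + I_astro          # I_astro: placeholder for astrocyte-induced modulation
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: reset membrane potential and bump recovery
            spike_times.append(step * dt)
            v, u = c, u + d
    return np.array(spike_times)

# A stronger input current (e.g. a brighter pixel) yields a higher firing rate
for current in (5.0, 10.0, 15.0):
    rate = len(simulate_izhikevich(current))   # spikes over T = 1000 ms, i.e. spikes/s
    print(f"I = {current:4.1f} -> {rate} spikes/s")
```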
The model was trained to memorize grayscale images (fig. 1). The images are converted into input currents that feed the neuronal layer. Neurons fire at different rates, depending on the amplitude of the input current. This difference in response triggers the astrocytic calcium response, which forms a specific pattern for each input. The resulting distribution of calcium concentration lasts for several seconds and takes part in the memory-storing process.
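The image-to-current step can be pictured with a tiny rate-coding sketch like the one below. The current range is my own assumption for illustration; the paper simply maps pixel brightness to the input current of the corresponding neuron.

```python
import numpy as np

def image_to_currents(image, I_min=2.0, I_max=15.0):
    """Rate coding: map pixel brightness to the input current of the matching neuron.

    The [I_min, I_max] range is an arbitrary choice for this illustration.
    """
    img = np.asarray(image, dtype=float) / 255.0   # normalise intensities to [0, 1]
    return I_min + img * (I_max - I_min)           # brighter pixel -> stronger current

# Example: a random 28x28 "image" becomes a 28x28 grid of input currents
currents = image_to_currents(np.random.randint(0, 256, size=(28, 28)))
print(currents.shape, currents.min(), currents.max())
```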

Fig. 2 shows a practical example of the input-output response of the SNN with bidirectional astrocyte tuning. An input greyscale image is used to stimulate neuronal activity; the neuronal response is modulated by the astrocytes, which ultimately store the image as a pattern of calcium concentration. This signal lasts for seconds, which allows the system to maintain the original information and retrieve it.

This is a remarkable result and, as always, a heads-up for the Data Science community. This paper is a small step forward towards brain-inspired artificial intelligence. Applications can be found in neuromorphic computing, where neuronal and synaptic computations can be enhanced simply by taking astrocyte-mediated responses into account. Additionally, the astrocyte layer provides one-shot learning, which is a formidable improvement over usual neural network architectures. This approach could even achieve better results than standard deep learning, with less data in the training process!
Behavioral Cloning in recurrent Spiking Networks: A Comprehensive Framework
Cristiano Capone, Paolo Muratore, Pier Stanislao Paolucci, Paper
As we saw in the previous paper, learning approaches are a hot topic in Neuroscience. In particular, there are two complementary families: learning driven by an error-based strategy and learning driven by a target-based strategy. In the former, error information is injected into the neural network and used to improve future performance, while in the target-based approach a target is selected and learned directly. In this paper the authors devise a new, more general framework that can be seen as the source of both error-based and target-based approaches, offering new insights into the learning dynamics of neural networks. This general view can be seen as a natural evolution of Imitation Learning and Behavioral Cloning. In particular, following the literature, the model takes the form of a spike-timing-based neural network, a mechanism experimentally suggested to be pivotal in the brain.
The authors propose a recurrent spiking model, where each neuron exposes an observable state representing the occurrence of a spike at a given time. This model has to interact with an environment to solve a specific task. Rather than using Reinforcement Learning, the model learns an optimal policy through Imitation Learning, which allows the agent to reproduce a set of expert behaviours given a set of states. In the learning step, the model learns from the target rather than from the error. The internal weights are influenced by a feedback matrix, whose rank is used as a knob to inspect how learning behaves.
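As a toy illustration (not the authors' exact learning rule), here is how a feedback matrix of controllable rank can broadcast a low-dimensional readout error to every neuron of a recurrent network; `n_neurons`, `n_out` and the rank values are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def feedback_matrix(n_neurons, n_out, rank):
    """Random feedback matrix of controllable rank, built as B = U V^T."""
    U = rng.standard_normal((n_neurons, rank))
    V = rng.standard_normal((n_out, rank))
    return U @ V.T                                  # shape: (n_neurons, n_out)

n_neurons, n_out = 500, 4
error = rng.standard_normal(n_out)                  # low-dimensional readout error

for rank in (1, n_out):
    B = feedback_matrix(n_neurons, n_out, rank)
    per_neuron_signal = B @ error                   # error broadcast to every neuron
    print(f"rank {np.linalg.matrix_rank(B)}: feedback signal shape {per_neuron_signal.shape}")
```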
Two situations are investigated: a button-and-food task, where the agent has to push a button to unlock food and then reach it, and the 2D bipedal walker from the OpenAI Gym suite, where the agent has to learn to walk and travel as far as possible. We will focus on the first experiment, whose results are depicted in fig. 3. Here, the agent was trained with different values of the rank of the feedback matrix. All training conditions converge, and the reward is maximal when high-rank feedback structures are used. In the second experiment, the 2D bipedal walker, the spike timing was tuned rather than the feedback matrix rank, showing that learning also relies on specific patterns of spikes. A bare-bones sketch of the second setting follows below.
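For readers who want to play with the walker, here is the data-collection half of Behavioural Cloning, assuming the classic OpenAI Gym API (pre-0.26). The "expert" below is just a random placeholder; the paper clones a real expert with a recurrent spiking network instead.

```python
import gym
import numpy as np

# Behavioural Cloning, data-collection step only: roll out an "expert" and record
# (state, action) pairs that a learner can then imitate with supervised learning.
env = gym.make("BipedalWalker-v3")

def expert_policy(state):
    return env.action_space.sample()       # stand-in for a pre-trained controller

states, actions = [], []
state = env.reset()
for _ in range(200):
    action = expert_policy(state)
    states.append(state)
    actions.append(action)
    state, reward, done, info = env.step(action)
    if done:
        state = env.reset()

dataset = (np.array(states), np.array(actions))     # supervised targets for cloning
print(dataset[0].shape, dataset[1].shape)           # (200, 24) states, (200, 4) actions
```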

Here are the conclusions:
- Modifying the rank of the feedback matrix leads to a larger solution space for the agent. This could shed light on experimental findings about how error propagates across different regions of the brain.
- On the other hand, typical motor tasks need, and benefit from, precise timing coding. This may be necessary to obtain finer movement control and better performance. In this case the rank is not the relevant quantity; the spike-timing modulations are.
Emergence of Memory Manifolds
Tankut Can, Kamesh Krishnamurthy, Paper
This is not an easy paper to read, as its roots lie in a fairly new mathematical field for machine learning. I will soon post something about this, in particular about the Martin-Siggia-Rose-De Dominicis-Janssen (MSRDJ) formalism, which is bringing new insights into neural networks. Here is the basic idea behind the paper.
The human brain can store a memory and retrieve it at the right time, depending on the task at hand. This means that biological systems can maintain memories for durations much longer than the intrinsic timescale of the neuronal response. This is a formidable problem for computational neural networks, where, even with recent advances, it is still a great hurdle to make use of memories at the right time.
From a mathematical point of view, the memory/brain system continually produces variables, and these create an object: the memory manifold. A manifold is a geometrical structure living in a multidimensional space, where each point can be thought of as a fixed point (a solution) of a given input problem. From here further mathematical questions arise: is the manifold stable? When is it unstable? Are there bifurcation points? Does the manifold have parameters to be tuned?
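To make the idea concrete, here is the simplest possible memory manifold, a line attractor in a two-neuron linear network. It also shows the fine-tuning problem the paper wants to avoid: detune the connectivity by 1% and the stored value leaks away. This is my own toy example, not taken from the paper.

```python
import numpy as np

def stored_value_after(eig, c0=1.0, T=200.0, dt=0.1):
    """Evolve dx/dt = -x + W x and read back the value stored along direction m."""
    m = np.array([1.0, 1.0]) / np.sqrt(2)    # memory direction
    W = eig * np.outer(m, m)                 # rank-1 connectivity; eig = 1 -> line attractor
    x = c0 * m                               # encode the value c0 on the manifold
    for _ in range(int(T / dt)):
        x += dt * (-x + W @ x)
    return float(m @ x)                      # decode the stored value

print("tuned   (eig = 1.00):", stored_value_after(1.00))   # value preserved: a memory manifold
print("detuned (eig = 0.99):", stored_value_after(0.99))   # value leaks away: fine-tuning problem
```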
To tackle this memory-manifold problem computationally, the authors take a new mathematical perspective on neural networks. To rephrase, the key question of the paper is: how can a memory be implemented in a neural network without time-varying quantities, special symmetries or fine-tuned parameters? The ultimate goal is to expand the memory of a network without playing too much with its parameters, so that information can be used at the right time, even long after it was last seen.
The authors define a memory manifold as a system sitting at a marginally stable point. They then describe a possible scenario, called "frozen stabilization" (FS): a process by which a family of neural networks can self-organise to a critical state, exhibiting memory manifolds without any further tuning. A recurrent neural network (RNN) framework is then developed from its mathematical roots, introducing a binary variable that can slow down part of the system, depending on its current state.
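A caricature of this freezing mechanism, written as a toy numpy loop rather than the paper's exact gated-RNN equations: a binary gate stops part of the units from updating, so their values persist (the memory) while still entering the dynamics of the active units through the recurrent weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy caricature of "frozen stabilization": a binary gate freezes about half of
# the units, so their values persist while still feeding the active units.
N, dt, steps = 100, 0.1, 2000
W = rng.standard_normal((N, N)) / np.sqrt(N)       # random recurrent connectivity
gate = (rng.random(N) < 0.5).astype(float)         # 1 = active unit, 0 = frozen unit

x = rng.standard_normal(N)                          # initial state: frozen part holds the memory
x0 = x.copy()

for _ in range(steps):
    x += dt * gate * (-x + W @ np.tanh(x))          # frozen units never move

print("frozen units unchanged:", np.allclose(x[gate == 0], x0[gate == 0]))
print("active units drifted by:", float(np.linalg.norm(x[gate == 1] - x0[gate == 1])))
```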

The early results are strongly encouraging. FS allows neural networks to self-organise into a state exhibiting memory manifolds and long timescales: the internal dynamics of part of the neurons is frozen, while the remaining part keeps evolving and biases the frozen half.
I hope you liked this review of September 2021 Neuroscience arxiv.org papers. Please feel free to send me an email with questions or comments at: [email protected]