Manifolds and Neural Activity: An Introduction

K L
Towards Data Science
7 min read · Jul 22, 2019


Manifolds are important objects in mathematics and physics because they allow more complicated structures to be expressed and understood in terms of simpler spaces. This is a key motivation for connecting this theory with neuroscience: to understand and interpret complex neural activity.

The manifold hypothesis states that real-world data (images, neural activity) lie on lower-dimensional spaces, called manifolds, embedded in a high-dimensional space. Loosely speaking, manifolds are topological spaces that look locally like Euclidean spaces. To give a simple example of a manifold, and to make sense of the first two sentences, consider a sphere. A sphere is not a Euclidean space because we cannot connect two points by straight lines but need the concept of geodesics; locally, however, the laws of Euclidean geometry are good approximations. The earth, for example, can be approximated as a sphere. You don’t experience your everyday life as living on a sphere but rather on a flat plane. So we can say that we “live” on a manifold. I hope this gives a little intuition for what the term manifold means.

Manifolds themselves belong to the mathematical branches of topology and differential geometry. They exist in spaces of any dimension, but for the sake of simplicity and to keep things intuitive we consider here only three-dimensional spaces.

In this article I don’t want to dive deep into the math behind manifolds but rather explore their relevance in neuroscience and how they can be used to gain more insights from neural activity data. If you are interested in a more in-depth mathematical explanation of manifolds, you might find this article interesting.

Why are we interested in manifolds in neural activity?

Recently, many studies of neural systems have been undergoing a paradigm shift from single-neuron to population-level hypotheses and analyses. Networks in the brain consist of thousands of neurons. We might expect the number of degrees of freedom of a network to be as large as its number of neurons. However, studies [1] have provided experimental evidence suggesting that local brain activity is confined to a subspace (a low-dimensional manifold) spanned by a few variables.

A key question that arises is what scientific insight can be gained by studying these populations of recorded neurons beyond studying each neuron individually. In fact, it turns out that single units often do not show any stimulus specificity when their activity is averaged. This is where the manifold hypothesis comes in: we want to find structures (or features) that are not apparent at the level of individual neurons. Moreover, simply averaging the responses of many neurons could obscure important signals, as neural populations often have massive diversity in their cell types, projection targets, etc. [2].

In computational neuroscience the manifold hypothesis argues that the underlying network connectivity constrains the possible patterns of neural population activity, and that these patterns are confined to a low-dimensional manifold spanned by a few independent variables we can call “neural modes” [3]. Gallego et al. further state that

“These neural modes capture a significant fraction of population covariance. It is the activation of these neural modes, rather than the activity of single neurons, that provides the basic building blocks of neural dynamics and function.”

To identify these neural modes we need to apply a dimensionality reduction method, which computes a low-dimensional representation of the high-dimensional neural activity.

Intuition behind dimensionality reduction

Here I will not go into a specific type of dimensionality reduction method like Principal Component Analysis (PCA) but will provide a general intuition based on [4]. If you are interested in a more thorough explanation of PCA, see this well-written blog post.

Typically we apply dimensionality reduction to data where we have D measured variables and we suspect that these variables can be better represented (or understood) by a smaller number K of “explanatory” variables. How we extract these K explanatory variables is specific to the method of choice. As one cannot directly observe these variables, they are termed latent variables. What we try to end up with is a description of the statistical features of our data, while excluding other aspects of the data as noise.
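
To make the D-versus-K picture concrete, here is a minimal sketch in Python. It is a toy construction of my own, not taken from the cited papers: data with D = 50 measured variables is generated from K = 3 latent variables, and PCA recovers a K-dimensional representation that captures almost all of the variance. All variable names and numbers are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

D, K, T = 50, 3, 1000    # measured variables, latent variables, samples

# Toy data that truly lives in a K-dimensional subspace, plus a little noise.
latents = rng.standard_normal((T, K))            # unobserved explanatory variables
mixing = rng.standard_normal((K, D))             # how the latents map onto measurements
data = latents @ mixing + 0.1 * rng.standard_normal((T, D))

pca = PCA(n_components=K)
low_dim = pca.fit_transform(data)                # T x K low-dimensional representation
print(pca.explained_variance_ratio_.sum())       # close to 1: K components suffice
```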

In neuroscience, the variable D usually corresponds to the number of observed neurons. As these neurons belong to an underlying network and are thus likely not independent of each other, it can be assumed that we need only a smaller number K of latent variables to explain their network activity. Here is a nice way to think about these latent variables, from Cunningham & Yu:

“The latent variables can be thought of as common input or, more generally, as the collective role of unobserved neurons in the same network as the recorded neurons.”

What we usually measure is the time series of action potentials emitted by a neuron. In neuroscience this is typically modeled as a Poisson process. The goal of dimensionality reduction is to characterize how the firing rates of different neurons covary (and to exclude the spiking variability as noise). Every neuron provides a different view of the same underlying process as captured by the K latent variables. The latent variables define a K-dimensional space which represents shared activity patterns that are prominent in the population response.
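
As a quick illustration of this point (again a toy construction of my own, not an analysis from the references): if a group of neurons all receive the same slowly varying latent drive, their Poisson spike counts covary even though each neuron’s spiking is noisy.

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons, n_bins = 20, 500
latent = 2.0 + np.sin(np.linspace(0, 8 * np.pi, n_bins))   # one shared latent drive
weights = rng.uniform(0.5, 2.0, size=n_neurons)            # each neuron's coupling to it

rates = np.outer(weights, latent)          # firing rates: neurons x time bins
spike_counts = rng.poisson(rates)          # Poisson spiking variability on top

# Pairwise correlations reflect the shared latent drive, not the Poisson noise.
corr = np.corrcoef(spike_counts)
print(corr[0, 1], corr[0, 5])              # clearly positive for all pairs
```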

The next part will provide an example of this and how we can use these underlying properties to build a generative model of the activity of individual neurons based on the activation of neural modes.

Neural Manifold

Figure 1: (A) The activity of each recorded neuron is a weighted combination of the time-varying activation of the neural modes. (B) Trajectory of the time-dependent population activity in the neural space spanned by three recorded neurons (red). This trajectory is mostly confined to the neural manifold, a plane shown in gray and spanned by the neural modes (green and blue vectors). (Figure adapted from [3].)

As described above, recent experimental work suggests that neural function may be built on the activation of specific population-wide activity patterns, which we call neural modes, rather than on the independent modulation of individual neurons. To estimate the number of these neural modes we apply a dimensionality reduction method like PCA to the recorded population activity. The obtained set of neural modes then defines a neural manifold. This manifold can be thought of as a surface which captures most of the variance in the recorded activity data, see the gray hyperplane in Figure 1(B). The time-dependent activation of the neural modes is called their latent dynamics. The activity of each neuron is represented as a weighted combination of the latent dynamics of all the modes, see Figure 1(A).
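
A common way to put a number on this is a cumulative explained-variance criterion. The sketch below is my own hedged version under simple assumptions, not the exact procedure of [3]: apply PCA to a time-by-neurons firing-rate matrix and count how many components are needed to reach, say, 90% of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Toy firing-rate matrix (time bins x neurons) generated from three modes.
rates = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 40)) \
        + 0.1 * rng.standard_normal((500, 40))

pca = PCA().fit(rates)
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_modes = int(np.searchsorted(cumvar, 0.90)) + 1   # components needed for 90% variance
print(n_modes)                                     # ~3 for this toy data
```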

To make what we just said a little more explicit, think of each neuron as one axis in an N-dimensional state space, where each axis corresponds to the firing rate of a neuron. The activity at a certain point in time corresponds to a point in this space, and the temporal evolution of the neuronal activity constitutes a trajectory [1]. Now, the trajectory (Figure 1(B), red line) tends to be constrained to a linear subspace (the neural manifold) of this state space rather than moving freely in all directions (Figure 1(B), grey line).

Each neuron can participate in one or more neural modes, and a neural mode typically involves a large fraction of the neurons in the population. In Figure 1(B) the neural space (or state space) for three neurons is depicted. Again, each axis represents the activity of one neuron. We mentioned before that the network connectivity constrains the possible patterns of population activity, which means the population dynamics will not explore the full high-dimensional neural space but will remain within a low-dimensional surface, the “neural manifold”. In our (simple) case this manifold is a flat hyperplane spanned by two neural modes u1 and u2.
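
Here is a toy version of this picture, under my own assumptions and only loosely mirroring Figure 1(B): the activity of three neurons is generated from two modes u1 and u2, so the population trajectory stays close to the plane they span in the three-dimensional neural space.

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.linspace(0, 2 * np.pi, 400)
latent1, latent2 = np.sin(t), np.cos(2 * t)          # latent dynamics of the two modes

u1 = np.array([1.0, 0.5, -0.3])                      # neural mode 1 (direction in neural space)
u2 = np.array([-0.2, 1.0, 0.8])                      # neural mode 2

trajectory = np.outer(latent1, u1) + np.outer(latent2, u2) \
             + 0.02 * rng.standard_normal((400, 3))  # small off-plane noise

# Distance of each point from the plane spanned by u1 and u2 (via the plane's normal).
normal = np.cross(u1, u2)
normal /= np.linalg.norm(normal)
off_plane = trajectory @ normal
print(off_plane.std())    # small compared to the in-plane excursions (~1)
```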

Neural modes can be used to build a generative model for the actual neural activity [3]. We can associate each neural mode with a latent variable, so that the neural activity at any point in time is a sum of the neural modes weighted by the respective latent variables, see Figure 1(A) [1].
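
A minimal sketch of this generative view follows, again as my own toy with PCA standing in for whatever dimensionality reduction method is used: the modes and their latent dynamics are estimated from the population activity, and each neuron’s activity is then reconstructed as a weighted combination of the mode activations.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Toy population activity (time bins x neurons) driven by two latent signals.
activity = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 30)) \
           + 0.05 * rng.standard_normal((500, 30))

pca = PCA(n_components=2).fit(activity)
latent_dynamics = pca.transform(activity)     # time-dependent activation of the modes
neural_modes = pca.components_                # one weight per neuron for each mode

# Each neuron's activity ~ sum over modes of (latent dynamics x mode weight).
reconstruction = latent_dynamics @ neural_modes + pca.mean_
print(np.abs(activity - reconstruction).max())   # small residual: the excluded noise
```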

These neural modes can then be used to describe task-specific neural manifolds, e.g. for motor cortices [5].

Conclusion

A major pursuit of science is to explain complex phenomena in simple terms. Dimensionality reduction enables us to study neurons at the population level rather than averaging the population response or studying each neuron individually. Neural modes span a low-dimensional manifold to which neural activity is confined, which allows us to detect patterns within a network.

Moreover, the neural modes and their latent dynamics have provided increased understanding of the function of many regions throughout the brain with insights that were not apparent at the level of individual neurons [5].

Still, there are open questions in this area of research. For example, the concept of neural manifolds is not restricted to flat surfaces; a neural manifold might be a nonlinear surface within the neural space. For complex behaviors, where the dynamics explore a larger region of the neural space, linear methods might give poor estimates and we need nonlinear methods, e.g. Isomap.
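
As a pointer, here is what using such a nonlinear method looks like in practice with scikit-learn’s Isomap, applied to a classic synthetic curved surface (a “Swiss roll”) rather than to real neural data.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points lying on a rolled-up 2-D sheet: a simple nonlinear manifold.
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Isomap "unrolls" the sheet by preserving geodesic (along-the-surface) distances.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (1000, 2)
```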

Note that the studies discussed in this article focus on neural manifolds associated with a specific task. Another question is how different manifolds are organized relative to each other within the neural space.

I hope this article gave you a first impression of what the neural manifold hypothesis is. For further reading I recommend the papers I referenced in this article.

References

[1] “Perturbing low dimensional activity manifolds in spiking neuronal networks”, E. Wärnberg, A. Kumar

[2] “Conceptual and technical advances define a key moment for theoretical neuroscience”, A. K. Churchland, L. F. Abbott

[3] “Neural Manifolds for the Control of Movement”, J. A. Gallego, M. G. Perich, L. E. Miller, S. A. Solla

[4] “Dimensionality reduction for large-scale neural recordings”, J. P. Cunningham, B. M. Yu

[5] “A stable, long-term cortical signature underlying consistent behavior”, J. A. Gallego, M. G. Perich, R. H. Chowdhury, S. A. Solla, L. E. Miller
