Chronological Representation

How can a neural network have chronological memory?

Egor Dezhic
Towards Data Science

--

It’s crucial to know the chronological order of events in order to learn causality, to plan, to synchronize activities in societies, and for many other reasons. However, it remains a huge challenge both for neuroscientists to understand how time is represented in the brain and for AI researchers to build agents that can operate in constantly changing environments.

Cognitive scientists, unlike physicists, usually treat time completely differently from space. Neuroscientists have already discovered many mechanisms responsible for circadian rhythms, heartbeat, brainwaves and other periodic biological “clocks”, as well as timers operating on the millisecond-to-second scale. However, how event memories are generated and stored, and how time should be represented for AI agents, are still open questions.

Most AI algorithms fundamentally treat time series as snapshots of the environment taken every X seconds, while events are stored as direct copies of inputs or of some intermediate representations. In many cases this strategy works well, but it is quite primitive. One recent work, Neural Ordinary Differential Equations, combines differential equations with neural networks to significantly improve their ability to handle data sampled at irregular intervals.
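To make the contrast concrete, here is a toy sketch of the continuous-time idea behind that work. This is plain Euler integration with a hand-written decay function standing in for a learned network — not the actual Neural ODE implementation — but it shows how a hidden state can evolve smoothly between irregularly spaced observations instead of jumping from snapshot to snapshot:

```python
def f(h, t):
    # Toy dynamics: smooth decay toward zero. In a Neural ODE this
    # would be a learned neural network, not a fixed formula.
    return [-0.5 * x for x in h]

def evolve(h, t0, t1, steps=100):
    """Euler-integrate the hidden state from time t0 to time t1."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        dh = f(h, t)
        h = [x + dt * dx for x, dx in zip(h, dh)]
        t += dt
    return h

# Observations sampled at irregular intervals: (time, value) pairs.
events = [(0.0, 1.0), (0.3, 0.2), (1.7, -0.4)]

h = [0.0]
prev_t = 0.0
for t, obs in events:
    h = evolve(h, prev_t, t)   # let the state drift between samples
    h = [x + obs for x in h]   # fold in the new observation
    prev_t = t
```

Because the state is defined at every instant, the gap between two samples can be 0.3 seconds or 1.4 seconds without any change to the model — the integrator simply runs for a different span of time.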

On the other hand, event memory in the brain is usually associated with the hippocampus. While the spatial representation of an animal’s position generated in the hippocampus is quite well understood, the temporal part still poses a problem. What we do know for sure: the hippocampus maintains more-or-less stable neurogenesis throughout life, and neurons inside it can migrate away. These two mechanisms, together with what all neurons do anyway, like growing synapses and Hebbian learning, could provide a flexible solution.

Imagine the following scenario. You generate neurons at a rate proportional to the level of excitement, or some other emotional state that signals how much new, potentially useful information the current environment contains. Then, each time a neuron is ready, it creates links to active representations across the brain as well as to previously generated neurons.
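A minimal sketch of this scenario as a data structure might look like the following. The class names, the novelty threshold, and the scalar "novelty" signal are all illustrative assumptions for the sake of the sketch, not claims about how the hippocampus actually works:

```python
class EventNode:
    """One 'neuron' in a chronological chain of experiences."""
    def __init__(self, timestamp, representation):
        self.timestamp = timestamp
        self.representation = representation  # link to active representations
        self.prev = None                      # link to previously generated node

class ChronologicalChain:
    def __init__(self, novelty_threshold=0.5):
        self.head = None  # most recently generated node
        self.novelty_threshold = novelty_threshold

    def observe(self, timestamp, representation, novelty):
        # Generate a node only when the environment is "exciting" enough,
        # mimicking a neurogenesis rate proportional to novelty.
        if novelty < self.novelty_threshold:
            return None
        node = EventNode(timestamp, representation)
        node.prev = self.head  # link back to the previous event
        self.head = node
        return node
```

Walking the `prev` links from the head recovers the agent’s experiences in reverse chronological order, with dull moments never having produced a node in the first place.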

You’ll end up with a chain of neurons that chronologically represents all of the agent’s experiences, as well as the connections between them. Intuitively, this is similar to a blockchain of experiences. What’s next? For example, during sleep you can optimize the chain by dropping the timesteps that contain the least information, improving the speed of search through it. Or you can add mechanisms that regulate the strength of connections between nodes, such as making a connection stronger when the timespan between the creation of the two nodes is smaller.
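Those two optimizations could be sketched like this, on a simplified chain of (timestamp, information) pairs. The information scores, the `keep_fraction` parameter, and the exponential decay constant `tau` are all made-up illustrative choices:

```python
import math

def prune(chain, keep_fraction=0.5):
    """Sleep-time optimization: drop the least-informative timesteps,
    keeping the survivors in chronological order."""
    keep = max(1, int(len(chain) * keep_fraction))
    survivors = sorted(chain, key=lambda e: e[1], reverse=True)[:keep]
    return sorted(survivors, key=lambda e: e[0])

def link_strength(t_a, t_b, tau=1.0):
    """Nodes created close together in time get stronger connections."""
    return math.exp(-abs(t_a - t_b) / tau)
```

Pruning shortens the chain that has to be searched, while the time-proximity weighting means that recalling one event preferentially activates its near-contemporaries.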

Does the brain store memories in this way? I don’t know, but I suspect the hippocampus does something similar. Is it compatible with existing research on temporal memory? I think so.

Resources

  • arxiv.org/1806.07366 — “Neural Ordinary Differential Equations” by Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud
  • arxiv.org/1901.03559 — “An investigation of model-free planning” by Arthur Guez, Mehdi Mirza, Karol Gregor, Rishabh Kabra, Sébastien Racanière, Théophane Weber, David Raposo, Adam Santoro, Laurent Orseau, Tom Eccles, Greg Wayne, David Silver, Timothy Lillicrap
  • cshperspectives.cshlp.org/7/2/a021808.full — “Place Cells, Grid Cells, and Memory” by May-Britt Moser, David C. Rowland, Edvard I. Moser
