
Introduction to Cognitive Computational Modelling of Human Brain (Part-I)

Machine Learning & Cognitive Tasks

Thoughts and Theory

Visualization of Anatomic fMRI (Image By Author)

Materials

This is the very first article of the series, namely "Cognitive Computational Modelling for Spatio-Temporal fMRI in Ventral Temporal Cortex". If you want to check out the whole series, go to the following link.

Cognitive Computational Modelling for Spatio-Temporal fMRI in Ventral Temporal Cortex

I will introduce the topic of cognitive computational modelling and its use in brain-decoding research. Let’s get started.

All related materials are hosted on my GitHub page. Don’t forget to check it out. If you prefer papers, you can also find the paper version of this series of articles in my repo.

Cognitive-Computational-Modelling-for-Spatio-Temporal-fMRI-in-Ventral-Temporal-Cortex


The ventral temporal cortex in the human brain is selective for different representations of visual stimuli from nature, and the ventral object vision pathway generates distributed and overlapping neural responses [21]. Single-cell studies in nonhuman primates have demonstrated that individual neurons in the ventral temporal cortex are differentially tuned to objects of different kinds and form representative features [6, 21]. However, this selectivity does not generalize or scale to a higher degree of object representations [8]. Statistical algorithms have been developed to model the neural architecture of the ventral cortex, but uncertainty about the pathway remains. Recent developments in neuroimaging have demonstrated that human perception, memories, and thoughts can be decoded spatio-temporally via functional magnetic resonance imaging (fMRI) [11]. However, the complexity and distributed nature of fMRI data, together with its spatio-temporal resolution, require sophisticated scientific tools. With advances in machine learning, neuroscientists can discover statistical and structural patterns in large-scale fMRI datasets and solve various tasks in neuroscience. Further, recent advances in deep learning enable researchers to tackle previously unsolved neuroscientific tasks [12], concretely demonstrating the importance of deep learning.

In this study, we build end-to-end discovery machine learning pipelines to decode the category of the visual stimulus viewed by a human subject from fMRI data. We utilize state-of-the-art exploratory neuroimaging techniques, such as echo-planar imaging, region-of-interest (RoI) analysis, statistical maps, anatomical views, and glass brain visualizations, to visualize and pre-analyze the structure of the fMRI samples.

My experiments are based on a block-design 4D time-series fMRI dataset, namely the Haxby dataset [7, 15, 8], from a study of face and object representation. It consists of 6 subjects with 12 runs per subject [8]. In each run, the subjects passively viewed grey-scale images of eight object categories, grouped in 24 s blocks separated by rest periods [8, 7]. Each image was shown for 500 ms and was followed by a 1500 ms inter-stimulus interval [7]. Full-brain fMRI data were recorded with a volume repetition time of 2.5 s; thus, a stimulus block was covered by roughly 9 volumes [8]. The dataset provides a high-resolution anatomical image for every subject except the sixth, along with 4D fMRI time-series data of 1452 volumes with 40x64x64 voxels each (corresponding to a voxel size of 3.5 x 3.75 x 3.75 mm and a volume repetition time of 2.5 s) [8]. There are eight stimulus categories: scissors, face, cat, scrambled pix, bottle, chair, shoe, and house. The resting-state chunks are eliminated, as they provide no additional information for decoding visual stimuli [8].

Examples of Visual Stimuli (Image By Author)

Before diving into the Python code for fetching the Haxby dataset and its exploratory fMRI analysis, let’s take a bird’s-eye view of the whole analysis and of how cognitive computational modelling can be performed in the context of neural decoding.

1. Discovery Neuroimaging Analysis

For the discovery neuroimaging analysis, we performed state-of-the-art exploratory neuroimaging analyses, such as echo-planar imaging, region-of-interest (RoI) analysis, statistical maps, anatomical views, and glass brain visualizations, to visualize and pre-analyze the structure of the fMRI samples. We’ll discuss this in depth in Part II of the series.
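As a taste of Part II, here is a minimal sketch of these visualizations using nilearn’s plotting module. It assumes `haxby_dataset` is the object returned by `nilearn.datasets.fetch_haxby` (fetched later in this article) and is only an illustration, not the exact code used in Part II.

```python
# Minimal visualization sketch (assumes `haxby_dataset` was fetched with
# nilearn.datasets.fetch_haxby, as shown later in this article).
from nilearn import image, plotting

mean_epi = image.mean_img(haxby_dataset.func[0])   # average the 4D run into one 3D volume
plotting.plot_anat(haxby_dataset.anat[0], title="Anatomical image")
plotting.plot_epi(mean_epi, title="Mean echo-planar image")
plotting.plot_roi(haxby_dataset.mask_vt[0], bg_img=haxby_dataset.anat[0],
                  title="Ventral temporal RoI mask")
plotting.plot_glass_brain(mean_epi, title="Glass brain view")
plotting.show()
```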

2. Functional Connectivity and Similarity Analysis of Ventral Temporal Cortex

We performed functional connectivity analysis based on correlation, precision, and partial correlation, and similarity analysis based on cosine, Minkowski, and Euclidean distances, to discover overlapping representations in the ventral temporal cortex.

This is quite useful in exploratory fMRI analysis, as it shows how distributed regions in the human brain share similar features from a statistical and mathematical perspective. We’ll discuss this in depth in Part III of the series.
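As an illustration, here is a hedged sketch of these measures. It assumes `masked_ts` is a (time points x voxels) NumPy array obtained by applying the ventral temporal mask to one run (e.g. with nilearn’s NiftiMasker); the actual pipeline appears in Part III.

```python
# Connectivity and similarity sketch (assumes `masked_ts` is a
# (n_timepoints, n_voxels) array from a masked ventral temporal run).
from nilearn.connectome import ConnectivityMeasure
from scipy.spatial.distance import pdist, squareform

# functional connectivity: correlation, precision, partial correlation
for kind in ("correlation", "precision", "partial correlation"):
    conn = ConnectivityMeasure(kind=kind).fit_transform([masked_ts])[0]
    print(kind, conn.shape)                      # (n_voxels, n_voxels) matrix

# similarity analysis: pairwise distances between voxel time courses
for metric in ("cosine", "minkowski", "euclidean"):
    dists = squareform(pdist(masked_ts.T, metric=metric))
    print(metric, dists.shape)
```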

3. Manifold Learning and Dimension Reduction in the Distributed Regions in Human Brain

Manifold learning and dimensionality reduction methods are performed on the per-subject ventral temporal masks to extract latent variables of the spatio-temporal masks, which will help the subsequent decoding of the human brain. As dimensionality reduction methods, we applied Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), Non-Negative Matrix Factorization (NNMF), and Multidimensional Scaling (MDS), and then compared the obtained subspaces via their 3D visualizations. Additionally, we performed manifold learning algorithms to extract the underlying manifold distribution in the masked ventral temporal regions. We applied t-distributed Stochastic Neighbour Embedding (t-SNE), Uniform Manifold Approximation and Projection (UMAP), ISOMAP, Locally Linear Embedding (LLE), and Spectral Embedding (SE), and then compared their lower-dimensional manifolds via 3D visualizations that further help the decoding process.

This will give a comprehensive introduction to understanding geodesic relations in the ventral temporal cortex of the human brain. From the ML perspective, it will be an in-depth review of the intersection of unsupervised learning and the human brain. We’ll discuss this in depth in Part IV of the series.
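To make the workflow concrete, here is a hedged sketch of the fit_transform pattern these methods share in scikit-learn. `X` is assumed to be a (samples x voxels) array of masked ventral temporal data, and only a subset of the methods listed above is shown; the others follow the same pattern.

```python
# Dimensionality-reduction / manifold-learning sketch (assumes `X` is a
# (n_samples, n_voxels) array of masked ventral temporal data).
from sklearn.decomposition import PCA, FastICA
from sklearn.manifold import TSNE, Isomap, LocallyLinearEmbedding

reducers = {
    "PCA": PCA(n_components=3),
    "ICA": FastICA(n_components=3, max_iter=1000),
    "t-SNE": TSNE(n_components=3, init="pca"),
    "Isomap": Isomap(n_components=3),
    "LLE": LocallyLinearEmbedding(n_components=3),
}
embeddings = {name: reducer.fit_transform(X) for name, reducer in reducers.items()}
for name, emb in embeddings.items():
    print(name, emb.shape)        # each is (n_samples, 3), ready for 3D visualization
```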

Up to this point, we have only performed discovery analyses to understand the fMRI data samples and their distribution. Next, we’ll dive into the decoding process.

4. Spatio-Temporal fMRI Decoding: ML & DL Algorithms

End-to-end machine learning algorithms are developed to categorize the stimuli based on the distributed and overlapping regions in the ventral temporal cortex. Precisely, we applied the following machine learning algorithms: Linear Support Vector Classifier (LinearSVC), Stochastic Gradient Descent Classifier (SGDClassifier), Multi-Layer Perceptron (MLP), Perceptron, Logistic Regression, Logistic Regression with Cross-Validation, Support Vector Classifier (SVC), Calibrated Classifier (probability calibration with isotonic regression), Passive Aggressive Classifier, Label Propagation Classifier, Random Forest Classifier, Gradient Boosting Classifier, Quadratic Discriminant Classifier, Ridge Classifier with Cross-Validation, Ridge Classifier, AdaBoost Classifier, Extra Trees Classifier, K-Neighbors Classifier, Bernoulli Naive Bayes Classifier, Gaussian Naive Bayes Classifier, Nu-Support Vector Classifier, Nearest Centroid Classifier, and Bagging Classifier. For robust ensemble decoding, we applied a novel ensemble of regularized models: FREM as a cross-validated ensemble of L2-regularized SVCs and FREM as a cross-validated ensemble of L2-regularized logistic regressions. We further constructed cognitive neural networks, precisely MLPs with GELU nonlinearity [10] and 2D and 3D Convolutional Neural Networks, taking advantage of the interactions between different streams of visual representations. We’ll discuss this in depth in Part V, the last article of the series.

Yes, that’s a huge list. I know, but it is crucial to run many decoding experiments and compare their results. No worries: we’ll implement nearly all of these ML algorithms in just two lines of code. Yes, I am serious. The power of modern ML tools!
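To give a flavor of those "two lines", here is a hedged sketch using a handful of scikit-learn classifiers. `X_train`, `X_test`, `y_train`, and `y_test` are assumed to come from the masked fMRI data and stimulus labels built later in the series, and the list can be extended to all the models above.

```python
# "Two lines of code" decoding sketch (assumes X_train, X_test, y_train,
# y_test were built from the masked fMRI data and stimulus labels).
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression, RidgeClassifier, SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

classifiers = [LinearSVC(), LogisticRegression(max_iter=2000), RidgeClassifier(),
               SGDClassifier(), RandomForestClassifier(), KNeighborsClassifier()]

# the "two lines": fit every classifier and collect its test accuracy
scores = {type(clf).__name__: clf.fit(X_train, y_train).score(X_test, y_test)
          for clf in classifiers}
print(scores)
```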

Let’s start coding. In this article, we’ll only download the Haxby dataset from the web, with just one line of code using the "nilearn" framework (which will be introduced in more detail later in the series), and explore the structure of the dataset.

First things first, we need to install the necessary Python packages. Open your favorite Jupyter Notebook and copy and paste the following code for the necessary installations.
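A possible installation cell is sketched below; the package list is an assumption based on the libraries used throughout this series, and your environment may already include some of them.

```python
# Installation cell for a Jupyter Notebook (package list is an assumption
# based on the libraries used throughout this series).
!pip install nilearn nibabel numpy pandas matplotlib scikit-learn
```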

Then, let’s import all the packages that we will be using along this journey. If you want to save your results, create folders called "images" and "results", or simply remove the folder-creation lines in the snippet below.
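A minimal sketch of such an import cell follows; the exact imports in the original notebook may differ slightly.

```python
# Import cell sketch (the exact list in the original notebook may differ).
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from nilearn import datasets, image, plotting

# optional output folders; remove these two lines if you don't want to save results
os.makedirs("images", exist_ok=True)
os.makedirs("results", exist_ok=True)
```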

We are ready to start! Please read the dataset description (docstring) below to understand the Haxby dataset; it may not be easy to grasp at first glance, but no worries, we’ll discuss it in more detail later on. Note that downloading the data from the web takes approximately 30 minutes, depending on your download speed.
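Here is a minimal sketch of that one-liner using nilearn’s `fetch_haxby`; fetching all six subjects plus the stimuli is an assumption about the original setup, and the returned keys may vary slightly between nilearn versions.

```python
# Download the Haxby dataset (all six subjects plus stimuli) with nilearn.
from nilearn import datasets

haxby_dataset = datasets.fetch_haxby(subjects=[1, 2, 3, 4, 5, 6], fetch_stimuli=True)
print(haxby_dataset.description)   # dataset docstring
print(haxby_dataset)               # Bunch with the file paths for each subject
```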

Yes, we downloaded the fMRI dataset. When you print haxby_dataset, you’ll see output like the following.

Let’s briefly discuss the output. Detailed descriptions were given above, and you can also consult the references for the Haxby dataset. The keys are:

  • anat: anatomical image of each subject’s brain
  • func: NIfTI images of the 4D fMRI data (will be converted to NumPy arrays)
  • session_target: files corresponding to our target variable (will be discussed later)
  • mask, mask_vt, mask_face, …: different spatial masks for extracting activated areas in the ventral temporal cortex

Let’s dive deeper. Here, we print the filenames of the subjects’ fMRI data. Don’t worry about the ".nii.gz" format; it is just a compressed NIfTI representation of the fMRI data. We’ll load it with ease using the "nilearn" library and convert it into a NumPy array.
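A hedged sketch of this inspection step follows; `load_img` and `get_fdata` are standard nilearn/nibabel calls, and the shape in the comment matches the dataset description given earlier.

```python
# Inspect the functional filenames and load one run as a NumPy array.
from nilearn import image

print(haxby_dataset.func)                  # one *.nii.gz path per subject
fmri_img = image.load_img(haxby_dataset.func[0])
print(fmri_img.shape)                      # e.g. (40, 64, 64, 1452): x, y, z, time
fmri_array = fmri_img.get_fdata()          # plain NumPy array of voxel intensities
print(fmri_array.dtype, fmri_array.shape)
```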

That’s it for this article. We covered how and why we construct spatio-temporal computational techniques for understanding the human brain, downloaded the Haxby dataset from the web, and briefly reviewed its structure. Congratulations! You’ve completed the first article and taken a step toward cognitive computational approaches for decoding the human brain.

In the next article, we’ll analyze and visualize the fMRI dataset using state-of-the-art neuroimaging approaches.

Links of Articles

Published Articles

  1. Introduction to Cognitive Computational Modelling of Human Brain (Part-I)
  2. Discovery Neuroimaging Analysis (Part-II)
  3. Functional Connectivity and Similarity Analysis of Human Brain (Part-III)
  4. Unsupervised Representation Learning on Distributed Regions in the Human Brain (Part-IV)

On the Way (Coming soon…)

  5. Placeholder for Part-V

Further Reading

The following references were used in my research on both the machine learning and neuroscience sides. I highly recommend copying them and reviewing them briefly.

References

[1] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization, 2016.

[2] L. Buitinck, G. Louppe, M. Blondel, F. Pedregosa, A. Mueller, O. Grisel, V. Niculae, P. Prettenhofer, A. Gramfort, J. Grobler, R. Layton, J. VanderPlas, A. Joly, B. Holt, 10 and G. Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108–122, 2013.

[3] X. Chu, Z. Tian, Y. Wang, B. Zhang, H. Ren, X. Wei, H. Xia, and C. Shen. Twins: Revisiting the design of spatial attention in vision transformers, 2021.

[4] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. 2006.

[5] K. J. Friston. Statistical parametric mapping. 1994.

[6] C. G. Gross, C. d. Rocha-Miranda, and D. Bender. Visual properties of neurons in inferotemporal cortex of the macaque. Journal of neurophysiology, 35(1):96–111, 1972.

[7] S. J. Hanson, T. Matsuka, and J. V. Haxby. Combinatorial codes in ventral temporal lobe for object recognition.

[8] J. Haxby, M. Gobbini, M. Furey, A. Ishai, J. Schouten, and P. Pietrini. "visual object recognition", 2018.

[9] R. A. Heckemann, J. V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage, 33(1):115–126, 2006.

[10] D. Hendrycks and K. Gimpel. Gaussian error linear units (gelus), 2020.

[11] S. Huang, W. Shao, M.-L. Wang, and D.-Q. Zhang. fMRI-based decoding of visual information from human brain activity: A brief review. International Journal of Automation and Computing, pages 1–15, 2021.

[12] R. Koster, M. J. Chadwick, Y. Chen, D. Berron, A. Banino, E. Düzel, D. Hassabis, and D. Kumaran. Big-loop recurrence within the hippocampal system supports integration of information across episodes. Neuron, 99(6):1342–1354, 2018.

[13] E. Maor. The Pythagorean theorem: a 4,000-year history. Princeton University Press, 2019.

[14] K. A. Norman, S. M. Polyn, G. J. Detre, and J. V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424–430, 2006.

[15] A. J. O’toole, F. Jiang, H. Abdi, and J. V. Haxby. Partially distributed representations of objects and faces in ventral temporal cortex. Journal of cognitive neuroscience, 17(4):580–590, 2005.

[16] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.

[17] R. A. Poldrack. Region of interest analysis for fMRI. Social Cognitive and Affective Neuroscience, 2(1):67–70, 2007.

[18] M. Poustchi-Amin, S. A. Mirowitz, J. J. Brown, R. C. McKinstry, and T. Li. Principles and applications of echo-planar imaging: a review for the general radiologist. Radiographics, 21(3):767–779, 2001.

[19] R. P. Reddy, A. R. Mathulla, and J. Rajeswaran. A pilot study of perspective taking and emotional contagion in mental health professionals: Glass brain view of empathy. Indian Journal of Psychological Medicine, page 0253717620973380, 2021.

[20] S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, and M. W. Woolrich. Network modelling methods for fMRI. NeuroImage, 54(2):875–891, 2011.

[21] K. Tanaka. Inferotemporal cortex and object vision. Annual review of neuroscience, 19(1):109–139, 1996.

[22] M. S. Treder. MVPA-Light: a classification and regression toolbox for multi-dimensional data. Frontiers in Neuroscience, 14:289, 2020.

[23] M. P. Van Den Heuvel and H. E. H. Pol. Exploring the brain network: a review on resting-state fMRI functional connectivity. European Neuropsychopharmacology, 20(8):519–534, 2010.

[24] G. Varoquaux, A. Gramfort, J. B. Poline, and B. Thirion. Brain covariance selection: better individual functional connectivity models using population prior. arXiv preprint arXiv:1008.5071, 2010.

[25] Y. Wang, J. Kang, P. B. Kemmer, and Y. Guo. An efficient and reliable statistical method for estimating functional connectivity in large scale brain networks using partial correlation. Frontiers in neuroscience, 10:123, 2016.

[26] S. Wold, K. Esbensen, and P. Geladi. Principal component analysis. Chemometrics and intelligent laboratory systems, 2(1–3):37–52, 1987.


