Exploring neuroimaging using machine learning

Neuroimaging, the set of techniques for measuring brain structure and activity, produces data that are high-dimensional and complex. Methods for interpreting these data provide a means of understanding how the brain encodes and decodes information. In this context, encoding refers to predicting the imaging data from external variables, such as stimulus descriptors, while decoding refers to learning a model that predicts behavioral or phenotypic variables from fMRI data. Framed this way, decoding is a supervised machine learning problem that relates brain images to behavioral or clinical observations, and scikit-learn can be used to fit these predictive models and cross-validate them.
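As a minimal sketch of this decoding setup, the snippet below fits a linear support vector classifier to hypothetical stand-in data (random numbers in place of real fMRI features and labels) and scores it with 5-fold cross-validation; the array shapes and labels are assumptions for illustration, not real imaging data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: 100 "scans" of 500 voxel values each,
# with one binary behavioral label per scan.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))   # imaging features (scans x voxels)
y = rng.integers(0, 2, size=100)  # condition label per scan

# Decoding: predict the label from the imaging data,
# scored on held-out folds via 5-fold cross-validation.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

With real data, `X` would come from masking and flattening the fMRI volumes; the cross-validation call is unchanged.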

I've explored Nilearn, a Python module that provides simple interfaces for applying machine learning to neuroimaging data. It also offers good visualizations of both raw data and processed results, and it's built on scikit-learn, a popular Python machine learning library.

In my fMRI project, I re-create the methods of Miyawaki et al. (2008) for inferring visual stimuli from brain activity. In their experiment, several series of 10×10 binary images were presented to two subjects while activity in the visual cortex was recorded. In the original paper, the training set is composed of random images (with black and white pixels balanced), while the test set is composed of structured images containing geometric shapes (square, cross…) and letters. I will use the training set with cross-validation to estimate scores on unseen data. I can examine both decoding (reconstructing the visual stimulus from fMRI data) and encoding (predicting the fMRI data from descriptors of the visual stimulus), which lets me look at the relation between stimulus pixels and brain voxels from both directions. The approach uses a support vector classifier and ridge-penalized logistic regression as prediction functions for both the decoding and the encoding.
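To make the two directions concrete, here is a simplified sketch on synthetic stand-in data (the pixel-to-voxel mapping, sample counts, and noise level are all invented for illustration, and the models are plain scikit-learn estimators rather than the exact pipeline of the original study). Decoding fits one classifier per stimulus pixel from the voxel responses; encoding fits a ridge regression predicting the voxel responses from the stimulus pixels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

# Hypothetical stand-in data for the Miyawaki-style setup:
# 200 presentations of 10x10 binary images (100 pixels) and
# simulated responses from 300 visual-cortex voxels.
rng = np.random.default_rng(42)
n_samples, n_pixels, n_voxels = 200, 100, 300
stimuli = rng.integers(0, 2, size=(n_samples, n_pixels))   # binary images
W = rng.normal(size=(n_pixels, n_voxels))                  # fake pixel-to-voxel map
voxels = stimuli @ W + rng.normal(scale=0.5, size=(n_samples, n_voxels))

# Decoding: a classifier per pixel, predicting pixel on/off from voxels
# (shown here for pixel 0 only; the full reconstruction repeats this per pixel).
pixel_clf = LogisticRegression(max_iter=1000)
pixel_clf.fit(voxels[:150], stimuli[:150, 0])
decoded = pixel_clf.predict(voxels[150:])

# Encoding: ridge regression predicting every voxel from the stimulus pixels.
enc = Ridge(alpha=1.0)
enc.fit(stimuli[:150], voxels[:150])
predicted_voxels = enc.predict(stimuli[150:])
print(decoded.shape, predicted_voxels.shape)
```

The same train/test split serves both directions, which is what makes the pixel-voxel relationship inspectable from either side.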

This June, I'll begin work in a neuroscience lab where I'll use computational methods to study the zebrafish brain. I hope to build on these skills as part of a self-driven passion for approaching neuroscience from a computational perspective; the interplay of experimental and theoretical models in proposing and re-evaluating hypotheses is fascinating.


Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M.-A., Morito, Y., Tanabe, H. C., et al. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60, 915–929. doi: 10.1016/j.neuron.2008.11.004