Convolutional neural networks (CNNs) trained for image recognition have been shown to explain cortical responses to static pictures in ventral-stream areas. Here, we further show that such a CNN can reliably predict and decode functional magnetic resonance imaging (fMRI) data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated to describe the bidirectional relationships between the CNN and the brain (Wen et al., 2017). Here, we provide the main source code (MATLAB or Python) related to this study.
Reference: Wen, H., Shi, J., Zhang, Y., Lu, K.-H., Cao, J., & Liu, Z. (2017). Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision. Cerebral Cortex. In press.
Cite this work
Researchers should cite this work as follows:
- Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, Zhongming Liu (2017). Source code for Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision. (Version 2.0). Purdue University Research Repository. doi:10.4231/R7N58JJ3
Note: in the subfunction file amir_sig_isvd.m, the function name "amir_sig_svd" was changed to "svd" at line 74.
Laboratory of Integrated Brain Imaging
This publication belongs to the Laboratory of Integrated Brain Imaging group.