Source code for Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision

A Datasets publication by the Laboratory of Integrated Brain Imaging group

By Haiguang Wen1, Junxing Shi1, Yizhen Zhang1, Kun-Han Lu1, Jiayue Cao, Zhongming Liu2

1. Purdue University 2. Weldon School of Biomedical Engineering, School of Electrical and Computer Engineering, Purdue University

This document includes the main source code (Matlab or Python) related to our study.


Version 1.0, published on 25 Sep 2017. doi:10.4231/R7V98675. Archived on 26 Oct 2017. Last public release: 2.0.

Licensed under CC0 1.0 Universal

Description

Convolutional neural networks (CNNs) driven by image recognition have been shown to explain cortical responses to static pictures in ventral-stream areas. Here, we further showed that such a CNN could reliably predict and decode functional magnetic resonance imaging (fMRI) data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated to describe the bidirectional relationships between the CNN and the brain (Wen et al., 2017). Here, we provide the main source code (Matlab or Python) related to this study.
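As a rough illustration of the encoding direction described above, the sketch below fits a voxel-wise linear (ridge) model mapping CNN-layer features to fMRI responses and scores it by the correlation between predicted and measured time series on held-out data. All array shapes, variable names, and the synthetic data are hypothetical placeholders, not taken from the released code.

```python
import numpy as np

# Hypothetical minimal sketch of a voxel-wise encoding model:
# ridge regression from CNN features to fMRI voxel responses.
rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 200, 50, 10          # time points, features, voxels (illustrative)

# Synthetic stand-ins for CNN features (X) and fMRI responses (Y)
X_train = rng.standard_normal((n_time, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_time, n_vox))

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge: W = (X'X + lam*I)^-1 X'Y, one linear model per voxel."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

W = fit_ridge(X_train, Y_train, lam=1.0)

# Evaluate on separate (held-out) data, as in the study's train/test split
X_test = rng.standard_normal((100, n_feat))
Y_test = X_test @ W_true + 0.1 * rng.standard_normal((100, n_vox))
Y_pred = X_test @ W

# Prediction accuracy: correlation per voxel between predicted and measured responses
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)])
print(r.mean())
```

The decoding direction reverses the mapping, regressing from fMRI responses back to CNN feature space; the same ridge machinery applies with X and Y swapped.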

Reference: Wen, H., Shi, J., Zhang, Y., Lu, K.-H., Cao, J., & Liu, Z. (2017). Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision. Cerebral Cortex. In press.

 

Cite this work

Researchers should cite this work as follows:

Wen, H., Shi, J., Zhang, Y., Lu, K.-H., Cao, J., & Liu, Z. (2017). Source code for Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision. Purdue University Research Repository. doi:10.4231/R7V98675

Tags

Laboratory of Integrated Brain Imaging

