Data for Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision Tests

Part of a series of dataset publications by the Laboratory of Integrated Brain Imaging

By Haiguang Wen1, Junxing Shi1, Yizhen Zhang1, Kun-Han Lu1, Jiayue Cao, Zhongming Liu2

1. Purdue University 2. Weldon School of Biomedical Engineering, School of Electrical and Computer Engineering, Purdue University

This video-fMRI dataset, acquired by the Laboratory of Integrated Brain Imaging (LIBI), contains the movie stimuli.

Version 1.0, published on 15 Sep 2017. doi:10.4231/R7SF2TCW. Archived on 16 Oct 2017.

Licensed under CC0 1.0 Universal


Description

This study brings major advances in encoding and decoding the cortical activity that supports human natural vision. For encoding, it demonstrates the unique promise of using deep learning to model and visualize functional representations at the level of single cortical locations along the entire visual pathway, and to create a computational workbench for high-throughput vision research. For decoding, the study presents a stand-alone, efficient, reliable, and generalizable strategy for decoding cortical fMRI activity to directly reconstruct the visual and semantic experiences during natural vision. These capabilities highlight a promising emerging direction: using the artificial brain to understand the biological brain.
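To make the encoding idea concrete, the sketch below fits a voxel-wise linear (ridge) encoding model that maps deep-network features of the movie stimuli to fMRI responses, which is the general approach the study describes. This is a minimal illustration, not the authors' released code; the placeholder data, array shapes, and regularization strength are assumptions.

```python
# Minimal sketch of a voxel-wise encoding model: deep-network features of the
# movie stimuli are linearly mapped to measured fMRI responses. The random
# placeholder data, shapes, and ridge penalty are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Assumed placeholder data: CNN-derived stimulus features (time x features)
# and fMRI responses (time x voxels), already aligned in time.
n_timepoints, n_features, n_voxels = 960, 1024, 500
features = rng.standard_normal((n_timepoints, n_features))
responses = rng.standard_normal((n_timepoints, n_voxels))

# One ridge regression predicts all voxels jointly; each voxel receives its
# own weight vector over the deep-network features.
encoder = Ridge(alpha=1.0)
encoder.fit(features, responses)

# Encoding accuracy is typically the per-voxel correlation between predicted
# and measured responses; in a real analysis this is computed on held-out
# (testing-movie) data rather than the training data used here.
predicted = encoder.predict(features)
r = np.array([np.corrcoef(predicted[:, v], responses[:, v])[0, 1]
              for v in range(n_voxels)])
print("mean voxel-wise correlation:", r.mean())
```

In the same spirit, decoding reverses the mapping, predicting feature or semantic representations from the fMRI responses and reconstructing the visual experience from them.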

The Laboratory of Integrated Brain Imaging (LIBI) at Purdue University acquired 3T fMRI responses from three subjects while they watched natural movies (Wen et al., 2017). The movie stimuli are diverse yet representative of real-life visual experiences, e.g., people in action, moving animals, nature scenes, and outdoor or indoor scenes. The stimuli comprise two sets of movie segments: 1) 18 training movie segments and 2) 5 testing movie segments, each 8 minutes long. During each fMRI scanning session, one segment was presented to the subject. For each subject, each training movie segment was presented twice and each testing movie segment was presented ten times. In total, there are 11.47 hours of fMRI responses to 3.07 hours of movie stimuli for each subject.
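As a quick sanity check on these totals, the arithmetic can be reproduced directly (a minimal Python sketch; all values are taken from the description above):

```python
# Check the stimulus and fMRI durations stated above (all values come from
# the dataset description).
SEGMENT_MIN = 8                      # each movie segment is 8 minutes
train_segments, test_segments = 18, 5
train_repeats, test_repeats = 2, 10  # presentations per subject

stimulus_hours = (train_segments + test_segments) * SEGMENT_MIN / 60
fmri_hours = (train_segments * train_repeats +
              test_segments * test_repeats) * SEGMENT_MIN / 60

print(f"stimuli: {stimulus_hours:.2f} h")  # 3.07 h
print(f"fMRI:    {fmri_hours:.2f} h")      # 11.47 h
```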

This publication series contains the data from the three subjects, the datasets with the stimuli, and the source code.

Wen, H., Shi, J., Zhang, Y., Lu, K.-H., Cao, J., & Liu, Z. (2017). Neural encoding and decoding with deep learning for dynamic natural vision. Cerebral Cortex. In press.


Cite this work

Researchers should cite this work as follows:

Wen, H., Shi, J., Zhang, Y., Lu, K.-H., Cao, J., & Liu, Z. (2017). Data for Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision Tests. Purdue University Research Repository. doi:10.4231/R7SF2TCW

Tags

Laboratory of Integrated Brain Imaging

