Data for Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision Tests - Stimuli

A dataset published by the Laboratory of Integrated Brain Imaging group.

By Haiguang Wen¹, Junxing Shi¹, Yizhen Zhang¹, Kun-Han Lu¹, Jiayue Cao, Zhongming Liu²

1. Purdue University 2. Weldon School of Biomedical Engineering, School of Electrical and Computer Engineering, Purdue University

This publication contains the movie stimuli from the video-fMRI dataset acquired by the Laboratory of Integrated Brain Imaging (LIBI).


Version 1.0 - published on 15 Sep 2017 - doi:10.4231/R71Z42KK - archived on 17 Oct 2017

Licensed under the Creative Commons Attribution 3.0 Unported license

Description

How are natural and dynamic sensory stimuli processed by neural circuits? Answering this question requires knowledge of how sensory neurons respond to natural stimuli (Felsen and Dan, 2005). For this purpose, the Laboratory of Integrated Brain Imaging (LIBI) at Purdue University acquired 3T fMRI responses from three subjects while they watched natural movies (Wen et al., 2017). The movie stimuli are diverse yet representative of real-life visual experiences, e.g. people in action, moving animals, nature scenes, and outdoor or indoor scenes. The stimuli include two sets of movie segments: 1) 18 training movie segments and 2) 5 testing movie segments. Each segment lasts 8 minutes, and one segment was presented per fMRI scanning session. For each subject, each training movie segment was presented twice and each testing movie segment was presented ten times. In total, there are 11.47 hours of fMRI responses to 3.07 hours of distinct movie stimuli for each subject.
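As a quick sanity check, the sketch below reproduces the stated per-subject durations from the segment counts and repetition scheme described above. The segment length, counts, and repetitions are taken from the description; the rest is plain arithmetic.

```python
# Reproduce the stated per-subject durations from the stimulus design.
SEGMENT_MINUTES = 8          # each movie segment is 8 minutes long
N_TRAIN, TRAIN_REPS = 18, 2  # 18 training segments, each presented twice
N_TEST, TEST_REPS = 5, 10    # 5 testing segments, each presented ten times

stimulus_minutes = (N_TRAIN + N_TEST) * SEGMENT_MINUTES
fmri_minutes = (N_TRAIN * TRAIN_REPS + N_TEST * TEST_REPS) * SEGMENT_MINUTES

print(f"Distinct movie stimuli: {stimulus_minutes / 60:.2f} hours")   # 3.07 hours
print(f"fMRI responses per subject: {fmri_minutes / 60:.2f} hours")   # 11.47 hours
```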

This dataset is part of a series; the related fMRI data from all three subjects are also published on PURR.

Felsen, G., & Dan, Y. (2005). A natural approach to studying vision. Nature Neuroscience, 8(12), 1643.

Wen, H., Shi, J., Zhang, Y., Lu, K.-H., Cao, J., & Liu, Z. (2017). Neural encoding and decoding with deep learning for dynamic natural vision. Cerebral Cortex. In press.

Cite this work

Researchers should cite this work as follows:

Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, Zhongming Liu (2017). Data for Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision Tests - Stimuli. Purdue University Research Repository. doi:10.4231/R71Z42KK

Notes

T1- and T2-weighted MRI and fMRI data were acquired on a 3 tesla MRI system (Signa HDx, General Electric Healthcare, Milwaukee) with a 16-channel receive-only phased-array surface coil (NOVA Medical, Wilmington). The fMRI data were acquired at 3.5 mm isotropic spatial resolution and 2 s temporal resolution using a single-shot, gradient-recalled echo-planar imaging sequence (38 interleaved axial slices with 3.5 mm thickness and 3.5×3.5 mm² in-plane resolution, TR = 2000 ms, TE = 35 ms, flip angle = 78°, field of view = 22×22 cm²). The fMRI data were preprocessed and then transformed onto the individual subjects’ cortical surfaces, which were co-registered across subjects onto a cortical surface template based on their patterns of myelin density and cortical folding. The preprocessing and registration were accomplished using the processing pipelines developed for the Human Connectome Project (https://www.humanconnectome.org/software/hcp-mr-pipelines).
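For readers working with the corresponding fMRI data, the sketch below illustrates how the acquisition parameters above constrain what to expect from a preprocessed run: with TR = 2 s, an 8-minute segment yields 240 volumes, and the raw volumes sit on a 3.5 mm isotropic grid. The file name and the use of NIfTI volumes here are assumptions for illustration only; the published data may be distributed in a different format (e.g., surface-based files produced by the HCP pipelines).

```python
import nibabel as nib  # widely used library for reading neuroimaging files

TR_SECONDS = 2.0          # repetition time from the acquisition protocol
SEGMENT_SECONDS = 8 * 60  # each movie segment lasts 8 minutes

# Expected number of fMRI volumes per movie segment: 240
expected_volumes = int(SEGMENT_SECONDS / TR_SECONDS)
print(f"Expected volumes per segment: {expected_volumes}")

# Hypothetical file name; adjust to the actual files in the published series.
img = nib.load("subject1_training_segment01.nii.gz")
n_volumes = img.shape[-1]                # number of time points (4th dimension)
voxel_size = img.header.get_zooms()[:3]  # voxel dimensions in mm

print(f"Volumes in file: {n_volumes}")
print(f"Voxel size (mm): {voxel_size}")  # nominally 3.5 x 3.5 x 3.5
assert n_volumes == expected_volumes, "unexpected number of time points"
```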

