"data" updates #31

Merged
mih merged 2 commits from about into uplift 2025-08-19 08:44:07 +00:00


@@ -24,6 +24,10 @@
+<h3>Behavior and Brain Function</h3>
+<p>A diverse set of stimulation paradigms and data acquisition setups were
+utilized to characterize participants' brain function on a variety of
+dimensions.</p>
<div class="card-data">
<div class='thumbnail'>
<img src="/img/data/thumb_moviefmri_acq_ao_hrfmri.jpg" />
@@ -32,7 +36,7 @@
<h3>High-res 7T fMRI on 2h audio movie (+cardiac/respiration)</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="http://www.nature.com/articles/sdata20143">Publication</a>
+<a href="https://doi.org/10.1038/sdata.2014.3">Publication</a>
<span class ='icon-flashlight'></span><a href="/explore.html">Explore</a>
<span class ='icon-tags'></span>7T, audio, cardiac, respiration
</p>
@@ -52,7 +56,7 @@
<h3>High-res 7T fMRI listening to music (cardiac/respiration)</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="http://dx.doi.org/10.12688/f1000research.6679.1">Publication</a>
+<a href="https://doi.org/10.12688/f1000research.6679.1">Publication</a>
<span class ='icon-flashlight'></span><a href="/explore.html">Explore</a>
<span class ='icon-tags'></span>7T, music, cardiac, respiration
</p>
@@ -73,12 +77,12 @@
<h3>3T fMRI on 2h movie, eyegaze, cardiac/respiration</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="http://www.nature.com/articles/sdata201692">Publication</a>
+<a href="https://doi.org/10.1038/sdata.2016.92">Publication</a>
<span class ='icon-tags'></span>3T, audio, eyegaze, cardiac, respiration
</p>
<p>Two-hour 3 Tesla fMRI acquisition while 15 participants were shown an
audio-visual version of the stimulus motion picture, simultaneously
-recording eye gaze location.</p>
+recording eye gaze location, heart beat and breathing.</p>
</div>
</div>
@@ -90,10 +94,14 @@
<h3>Retinotopic mapping</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="http://www.nature.com/articles/sdata201693">Publication</a>
-<span class ='icon-tags'></span>3T, retinotopic mapping
+<a href="https://doi.org/10.1038/sdata.2016.93">Publication</a>
+<span class ='icon-tags'></span>3T, retinotopic mapping, visual cortex, eccentricity, polar angle
</p>
+<p>Standard 3 mm fMRI recording of a retinotopic mapping procedure with
+expanding and contracting ring and rotating wedge stimuli. Resulting
+eccentricity and polar angle maps of the visual cortex of 15 participants
+are available.
+</p>
-<p></p>
</div>
</div>
@@ -105,10 +113,13 @@
<h3>Higher visual area localizer</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="http://www.nature.com/articles/sdata201693">Publication</a>
-<span class ='icon-tags'></span>3T, localizer
+<a href="https://doi.org/10.1038/sdata.2016.93">Publication</a>
+<span class ='icon-tags'></span>3T, visual area localizer, block-design, one-back task
</p>
-<p></p>
+<p>3mm fMRI data from a standard block-design visual area localizer using grayscale
+images for the stimulus categories human faces, human bodies without heads, small
+objects, houses and outdoor scenes comprising nature and street scenes, and
+phase-scrambled images.</p>
</div>
</div>
@@ -117,20 +128,24 @@
<img src="/img/data/thumb_orientfmri_acq.jpg" />
</div>
<div class='description'>
-<h3>Multi-res 7T fMRI (0.8-3mm) on visual orientation</h3>
+<h3>Multi-res 3T/7T fMRI (0.8-3mm) on visual orientation</h3>
<p class="info">
<span class ='icon-doc-text'></span>
<a href="https://doi.org/10.1016/j.neuroimage.2016.12.040">Publication</a>
-<span class ='icon-tags'></span>7T, decoding
+<span class ='icon-tags'></span>7T, 3T, visual, oriented gratings, decoding
</p>
-<p>Ultra high-field fMRI data recorded at four spatial resolutions (0.8
-mm, 1.4 mm, 2 mm, and 3 mm isotropic voxel size) for orientation
-decoding in visual cortex.</p>
+<p>3T and ultra high-field 7T fMRI data recorded at 0.8 (7T-only), 1.4, 2,
+and 3 mm isotropic voxel size under stimulation with flickering, oriented
+grating stimuli. Grating orientation in the left and right visual field
+varied independently to enable decoding analyses.</p>
</div>
</div>
+<h3>Brain Structure and Connectivity</h3>
+<p>A versatile set of structural brain images is available to provide a
+comprehensive in-vivo assessment of all participants' brain hardware.</p>
<div class="card-data">
<div class='thumbnail'>
<img src="/img/data/thumb_t1w.jpg" />
@@ -139,7 +154,7 @@
<h3>T1-weighted MRI</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="https://www.nature.com/articles/sdata20143">Publication</a>
+<a href="https://doi.org/10.1038/sdata.2014.3">Publication</a>
<span class ='icon-tags'></span>3T, T1
</p>
<p>An image with 274 sagittal slices (FoV 191.8×256×256mm) and an
@@ -159,7 +174,7 @@
<h3>T2-weighted MRI</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="https://www.nature.com/articles/sdata20143">Publication</a>
+<a href="https://doi.org/10.1038/sdata.2014.3">Publication</a>
<span class ='icon-tags'></span>3T, T2
</p>
<p>A 3D turbo spin-echo (TSE) sequence (TR 2500ms, TEeff 230ms, strong
@@ -178,7 +193,7 @@
<h3>Susceptibility-weighted MRI</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="https://www.nature.com/articles/sdata20143">Publication</a>
+<a href="https://doi.org/10.1038/sdata.2014.3">Publication</a>
<span class ='icon-tags'></span>3T, SWI
</p>
<p>An image with 500 axial slices (thickness 0.35mm, FoV 181×202×175mm)
@@ -198,7 +213,7 @@
<h3>Diffusion-weighted MRI</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="https://www.nature.com/articles/sdata20143">Publication</a>
+<a href="https://doi.org/10.1038/sdata.2014.3">Publication</a>
<span class ='icon-flashlight'></span><a href="/explore.html">Explore</a>
<span class ='icon-tags'></span>3T, DTI
</p>
@@ -221,7 +236,7 @@
<h3>Angiography</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="https://www.nature.com/articles/sdata20143">Publications</a>
+<a href="https://doi.org/10.1038/sdata.2014.3">Publication</a>
<span class ='icon-tags'></span>7T, angiography
</p>
<p>A 3D multi-slab time-of-flight angiography was recorded at 7 Tesla for
@@ -240,7 +255,7 @@
<h3>Cortical surface reconstruction</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="https://www.nature.com/articles/sdata20143">Publication</a>
+<a href="https://doi.org/10.1038/sdata.2014.3">Publication</a>
<span class ='icon-flashlight'></span><a href="/explore.html">Explore</a>
<span class ='icon-tags'></span>derivative
</p>
@@ -252,21 +267,21 @@
</div>
<h3>Movie Stimulus Annotations</h3>
-<p>We are continuously expanding the set of annotations for particular movie
-properties. The following items show a subset of what we have already in
-store. If you are planning on using any of these, or you are looking for
-annotations for different properties, please get in touch. We are
-constantly working on improving existing annotations as well, and updates
-may already be available.</p>
+<p>Annotating the content of the Forrest Gump movie, the key stimulus used
+in the project, is an open-ended endeavour. Its naturalistic character,
+rich in diverse visual and auditory features, but also in facets of
+social communication, both enables and requires a versatile description.</p>
<div class="card-data">
<div class='thumbnail'>
<img src="/img/data/thumb_annot_structure.png" />
</div>
<div class='description'>
-<h3>Scenes and Shots</h3>
+<h3>Location Changes and Time Progression</h3>
<p class="info">
-<span class ='icon-tags'>movie, annotation
+<span class ='icon-doc-text'></span>
+<a href="https://doi.org/10.12688/f1000research.9536.1">Publication</a>
+<span class ='icon-tags'></span>movie, annotation, cut, scenes, time progression, location
</p>
<p>Start and end times for all scenes in the cut of the movie that is used
as the stimulus. In addition, each annotation contains whether a scene
@@ -284,9 +299,14 @@
<div class='description'>
<h3>Speech</h3>
<p class="info">
-<span class ='icon-tags'>movie, annotation
+<span class ='icon-doc-text'></span>
+<a href="https://doi.org/10.12688/f1000research.27621.1">Publication</a>
+<span class ='icon-tags'></span>movie, annotation, speech, grammar, word, dialog
</p>
-<p>Information on the speech content of the movie.</p>
+<p>The exact timing of each of the more than 2500 spoken sentences,
+16000 words (including 202 non-speech vocalizations), 66000 phonemes,
+and their corresponding speaker. Additionally, every word is associated
+with a grammatical category, and its syntactic dependencies are defined.</p>
</div>
</div>
@@ -298,10 +318,14 @@
<h3>Portrayed Emotions</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="http://dx.doi.org/10.12688/f1000research.6230.1">Publication</a>
-<span class ='icon-tags'>movie, annotation
+<a href="https://doi.org/10.12688/f1000research.6230.1">Publication</a>
+<span class ='icon-tags'></span>movie, annotation, emotion, arousal, valence, cue
</p>
-<p>A description of portrayed emotions from the movie.</p>
+<p>Description of portrayed emotions in the movie and the audio
+description stimulus. The nature of an emotion is characterized with
+basic attributes, such as onset, duration, arousal and valence, as well
+as explicit emotion category labels, and a record of the perceptual evidence
+for the presence of an emotion.</p>
</div>
</div>
@@ -313,10 +337,11 @@
<h3>Semantic Conflict</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="https://f1000research.com/articles/5-2375">Publication</a>
-<span class ='icon-tags'>movie, annotation
+<a href="https://doi.org/10.12688/f1000research.9635.1">Publication</a>
+<span class ='icon-tags'></span>movie, annotation, lies, irony
</p>
-<p>A description of semantic conflict in the movie.</p>
+<p>Identification of episodes with portrayal of lies, irony or sarcasm by
+three independent observers.</p>
</div>
</div>
@@ -328,7 +353,7 @@
<h3>Location Changes and Time Progression</h3>
<p class="info">
<span class ='icon-doc-text'></span>
-<a href="http://f1000research.com/articles/5-2273">Publication</a>
+<a href="https://doi.org/10.12688/f1000research.9536.1">Publication</a>
<span class ='icon-tags'></span>movie, annotation
</p>
<p>An annotation of location and temporal progression in the movie.</p>
@@ -342,9 +367,14 @@
<div class='description'>
<h3>Body Contact</h3>
<p class="info">
-<span class ='icon-tags'>movie, annotation
+<span class ='icon-doc-text'></span>
+<a href="https://github.com/psychoinformatics-de/studyforrest-paper-bodycontactannotation/blob/master/paper/p.tex">Description</a>
+<span class ='icon-tags'></span>movie, annotation, body parts, touch, body language
</p>
+<p>A detailed description of all body contact events in the movie, including
+timing, actor and recipient, body parts involved, intensity and valence of the
+touch, and any potential audio cues.
+</p>
-<p></p>
</div>
</div>
@@ -355,9 +385,15 @@
<div class='description'>
<h3>Eye Movement Labels</h3>
<p class="info">
-<span class ='icon-tags'>movie, annotation
+<span class ='icon-doc-text'></span>
+<a href="https://doi.org/10.3758/s13428-020-01428-x">Publication</a>
+<span class ='icon-tags'></span>movie, annotation, eye gaze, saccades, fixations, smooth pursuit
</p>
+<p>Classification of eye movements for two groups of 15 participants watching the
+movie, one group inside an MRI scanner, and another group in a laboratory setting.
+Saccades, post-saccadic oscillations, fixations, and smooth pursuit events are
+distinguished.
+</p>
-<p></p>
</div>
</div>
@@ -368,19 +404,17 @@
<div class='description'>
<h3>Music</h3>
<p class="info">
-<span class ='icon-tags'>music, annotations</p>
-<p></p>
+<span class ='icon-tags'></span>music, annotations, soundtrack</p>
+<p>Timing and identity of every musical piece in the movie's soundtrack.
+</p>
</div>
</div>
-<h2>Subject Overview</h2>
+<h2>Participant/Acquisition Overview</h2>
<p>The following table shows what data are available for each participant.
-Participant IDs are consistent across all acquisitions. Availability codes
-in the table are as follows: **X**: released; **A**: acquired, but not yet
-released, **P**: planned. An empty cell indicates that an acquisition was
-neither done nor is planned.</p>
+Participant IDs are consistent across all acquisitions.</p>
-<p>Raw and preprocessed data were, and continue to be, released in several
+<p>Raw and preprocessed data were released in several
datasets (parts), and are typically available from multiple locations, where
some may provide further updates. If you cannot locate a dataset component
you are interested in, please get in touch. Likewise, if you want to
@@ -424,7 +458,7 @@
<td>X</td>
<td>X</td>
<td>X</td>
-<td>A</td>
+<td>X</td>
<td> </td>
</tr>
</tbody>
@@ -514,7 +548,7 @@
<td>X</td>
<td>X</td>
<td>X</td>
-<td>A</td>
+<td>X</td>
<td> </td>
</tr>
<tr><th>Localizer for visual areas (3T fMRI)</th>
@@ -553,16 +587,16 @@
<td> </td>
<td> </td>
<td> </td>
-<td>A</td>
+<td>X</td>
<td> </td>
-<td>A</td>
+<td>X</td>
<td> </td>
<td> </td>
-<td>A</td>
+<td>X</td>
<td> </td>
<td> </td>
-<td>A</td>
-<td>A</td>
+<td>X</td>
+<td>X</td>
<td> </td>
</tr>
</tbody>