fix a few language nits and typos in the readme #6

Merged
aqw merged 1 commit from master into master 2021-05-11 06:27:28 +00:00


@@ -1,4 +1,4 @@
-# Annotations of low-level perceptual confounds in the research cut of the audio-visual movie "Forrest Gump" and it's audio-description
+# Annotations of low-level perceptual confounds in the research cut of the audio-visual movie "Forrest Gump" and its audio-description
[![made-with-datalad](https://www.datalad.org/badges/made_with.svg)](https://datalad.org)
[![PDDL-licensed](https://img.shields.io/badge/license-PDDL-blue.svg)](http://opendatacommons.org/licenses/pddl/summary)
@@ -13,7 +13,7 @@ For further information about the project visit: http://studyforrest.org
Frame-wise (40 milliseconds) annotations of auditory and visual low-level confounds for each stimulus segment of the audio-description
and audio-visual movie (audio-description: e.g. `fg_ad_seg0_rms.tsv`; movie: e.g. `fg_av_ger_seg0_rms.tsv`).
-One file of tap-separated values for every confound (providing onset, duration, and value of confound):
+One file of tab-separated values for every confound (providing onset, duration, and value of confound):
* `audio/*_rms.tsv`: root-mean square power (a.k.a. volume)
* `audio/*_lrdiff.tsv`: left-right volume difference
* `visual/*_brmean.tsv`: mean brightness of a movie frame
@@ -21,7 +21,7 @@ For further information about the project visit: http://studyforrest.org
* `visual/*_brud.tsv`: difference in brightness upper half minus lower half of each movie frame (a.k.a. "bring me that horizon")
* `visual/*_phash.tsv.`: perceptual hash of each movie frame (computed by the phash function of [imagehash v4.1.0](https://pypi.org/project/ImageHash/))
* `visual/*_normdiff.tsv`: normalized perceptual difference of each movie frame in respect to its previous movie frame
- ``code/``
@@ -29,7 +29,7 @@ For further information about the project visit: http://studyforrest.org
- ``inputs/``
-The segmented stimulus media files (Matroska Multimedia Container) of audio-description and audio-visual movie as used during fMRI scanning. Not publicly accessible.
+The segmented stimulus media files (Matroska Multimedia Container) of the audio-description and audio-visual movie as used during fMRI scanning. Not publicly accessible.
## How to obtain the data files
@@ -37,8 +37,8 @@ This repository is a [DataLad](https://www.datalad.org/) dataset. It provides
fine-grained data access down to the level of individual files, and allows for
tracking future updates. In order to use this repository for data retrieval,
[DataLad](https://www.datalad.org/) is required. It is a free and
-open source command line tool, available for all major operating
-systems, and builds up on Git and [git-annex](https://git-annex.branchable.com/)
+open source command line tool available for all major operating
+systems, and builds upon Git and [git-annex](https://git-annex.branchable.com/)
to allow sharing, synchronizing, and version controlling collections of
large files. You can find information on how to install DataLad at
[handbook.datalad.org/intro/installation.html](http://handbook.datalad.org/intro/installation.html).
@@ -53,7 +53,7 @@ datalad clone <url>
Once a dataset is cloned, it is a light-weight directory on your local machine.
At this point, it contains only small metadata and information on the
-identity of the files in the dataset, but not actual *content* of the
+identity of the files in the dataset, but not the actual *content* of the
(sometimes large) data files.
### Retrieve dataset content
@@ -67,7 +67,7 @@ datalad get <path/to/directory/or/file>
This command will trigger a download of the files, directories, or
subdatasets you have specified.
-DataLad datasets can contain other datasets, so called *subdatasets*.
+DataLad datasets can contain other datasets (so called *subdatasets*).
If you clone the top-level dataset, subdatasets do not yet contain
metadata and information on the identity of files, but appear to be
empty directories. In order to retrieve file availability metadata in
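Read together, the README hunks above describe a two-step clone-then-get workflow. A minimal sketch follows; the dataset's actual clone URL is not shown in this diff, so `<url>` is left as the README's own placeholder, and the local directory name is hypothetical:

```shell
# Step 1: clone the lightweight dataset skeleton (file identities and
# metadata only, no large content yet); <url> is the clone URL from
# the full README
datalad clone <url>
cd forrest_confounds   # hypothetical local directory name

# Step 2: fetch actual file content on demand, e.g. the RMS power
# annotation for the first audio-description segment named in the README
datalad get audio/fg_ad_seg0_rms.tsv

# or retrieve everything, recursing into any subdatasets
datalad get -r .
```

This mirrors the README's point that a fresh clone is cheap: content of the large annotation files only arrives when explicitly requested with `datalad get`.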