Compare to one algorithm to rule them all #16

Closed
opened 2018-09-04 09:52:08 +00:00 by mih · 12 comments
mih commented 2018-09-04 09:52:08 +00:00 (Migrated from github.com)
https://link.springer.com/article/10.3758/s13428-016-0738-9
AsimHDar commented 2018-09-04 10:24:47 +00:00 (Migrated from github.com)
https://github.com/richardandersson/EyeMovementDetectorEvaluation
AsimHDar commented 2018-09-06 11:41:09 +00:00 (Migrated from github.com)

Their main script is broken from what I can tell -- it has missing functions.

Their data are also labelled oddly and I can't make sense of it. For example, there are 34 files (as also stated in the paper), but 11 are labelled by one human coder and the rest by the other. I don't see how their outputs are ultimately supposed to be compared.

AsimHDar commented 2018-09-07 15:54:02 +00:00 (Migrated from github.com)

Going to see if this set of data is any better:

http://michaeldorr.de/smoothpursuit/ECEM_poster.pdf

mih commented 2018-09-08 04:37:14 +00:00 (Migrated from github.com)

You could look at which coordinate time series are the same and ignore the file names.
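A minimal sketch of that idea, assuming each recording can be loaded as a sequence of (x, y) gaze samples (the file parsing itself is omitted here): hash the coordinate series so that files with identical data end up in the same group, regardless of their names.

```python
import hashlib

def series_fingerprint(xy):
    # Hash a sequence of (x, y) gaze samples; identical
    # coordinate time series produce identical digests.
    data = ";".join(f"{x:.6f},{y:.6f}" for x, y in xy)
    return hashlib.sha1(data.encode()).hexdigest()

def group_by_series(named_series):
    # Map {file name: samples} to groups of file names that
    # share the exact same coordinate time series.
    groups = {}
    for name, xy in named_series.items():
        groups.setdefault(series_fingerprint(xy), []).append(name)
    return list(groups.values())
```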

AsimHDar commented 2018-09-10 15:56:08 +00:00 (Migrated from github.com)

FYI: the files end with RA or MN. These are the initials of the annotators ("human coders").

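Given that convention, splitting the label files by coder is straightforward. The RA/MN suffix is from the comment above; everything else about the file names in this sketch is an assumption.

```python
from collections import defaultdict
from pathlib import Path

def split_by_coder(paths, coders=("RA", "MN")):
    # Group label files by the coder suffix at the end of the
    # file stem, e.g. "trial1_RA.mat" -> "RA".
    by_coder = defaultdict(list)
    for p in paths:
        stem = Path(p).stem
        tag = next((c for c in coders if stem.endswith(c)), "unknown")
        by_coder[tag].append(p)
    return dict(by_coder)
```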
mih commented 2018-09-10 16:52:51 +00:00 (Migrated from github.com)

Overall the performance is pretty good -- primarily on the video samples. Our algorithm works better on longer recordings. The primary shortcoming w.r.t. the hand-labeled data is this: we can only label a single event between two saccades, but humans label fixation-pursuit-fixation combos. Here is an example (our labels are on top, theirs are below):

![image](https://user-images.githubusercontent.com/136479/45311836-aff31f00-b52a-11e8-990a-c539c381caca.png)
mih commented 2018-09-11 13:34:55 +00:00 (Migrated from github.com)

> The primary shortcoming w.r.t. the hand-labeled data is this: we can only label a single event between two saccades, but humans label fixation-pursuit-fixation combos.

I added this ability now.
mih commented 2018-09-11 15:15:43 +00:00 (Migrated from github.com)
Relevant paper on smooth pursuits: https://link.springer.com/article/10.3758/s13428-012-0234-9
mih commented 2018-09-12 08:04:51 +00:00 (Migrated from github.com)

@ElectronicTeaCup can we have the labeling output of the winning algorithm? We could compare ours against that, in addition to the human raters.

AsimHDar commented 2018-09-16 15:56:05 +00:00 (Migrated from github.com)

ed9959e has the outputs from the NH algorithm, which ranked best for detecting fixations. Going to see if I can get outputs from the LNS algorithm (best for finding saccades), and determine any others that might be of interest to us.

AsimHDar commented 2018-09-18 20:39:50 +00:00 (Migrated from github.com)

Can't find the Larsson algorithm. Anyone else have any luck? I am thinking we use NH (for fixations) and LNS (for saccades and PSOs).

The paper (one algorithm to rule them all) also says that for fixations in dynamic stimuli the IKF and BIT algorithms are the best (Cohen's kappa of 0.14; coder RA: 0.82). Should we include these for that one particular comparison?
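For reference, Cohen's kappa over two sample-wise label sequences can be computed directly. This is a generic sketch of the metric, not the paper's evaluation code:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Agreement between two equal-length label sequences,
    # corrected for chance agreement.
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    if expected == 1:
        return 1.0  # degenerate case: both raters constant and identical
    return (observed - expected) / (1 - expected)
```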

mih commented 2019-04-23 07:05:07 +00:00 (Migrated from github.com)
Done in https://github.com/psychoinformatics-de/paper-remodnav now.