DIDEC: The Dutch Image Description and Eye-tracking Corpus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review



We present a corpus of spoken Dutch image descriptions, paired with two sets of eye-tracking data: free viewing, where participants look at images without any particular purpose, and description viewing, where we track eye movements while participants produce spoken descriptions of the images they are viewing. This paper describes the data collection procedure and the corpus itself, and provides an initial analysis of self-corrections in image descriptions. We also present two studies showing the potential of this data. Though these studies mainly serve as an example, we do find two interesting results: (1) the eye-tracking data for the description-viewing task is more coherent than for the free-viewing task; (2) variation in image descriptions (also called image specificity; Jas and Parikh, 2015) is only moderately correlated across different languages. Our corpus can be used to gain a deeper understanding of the image description task, particularly how visual attention is correlated with the image description process.
Original language: English
Title of host publication: Proceedings of the 27th International Conference on Computational Linguistics
Number of pages: 12
Publication status: Published - 2018
Event: International Conference on Computational Linguistics 2018 - Santa Fe Community Convention Center, Santa Fe, United States
Duration: 20 Aug 2018 – 26 Aug 2018
Conference number: 27


Conference: International Conference on Computational Linguistics 2018
Abbreviated title: COLING 2018
Country/Territory: United States
City: Santa Fe


