Abstract
We present a corpus of spoken Dutch image descriptions, paired with two sets of eye-tracking data: free viewing, where participants look at images without any particular purpose, and description viewing, where we track eye movements while participants produce spoken descriptions of the images they are viewing. This paper describes the data collection procedure and the corpus itself, and provides an initial analysis of self-corrections in image descriptions. We also present two studies showing the potential of this data. Though these studies mainly serve as examples, we do find two interesting results: (1) the eye-tracking data for the description viewing task is more coherent than for the free-viewing task; (2) variation in image descriptions (also called image specificity; Jas and Parikh, 2015) is only moderately correlated across different languages. Our corpus can be used to gain a deeper understanding of the image description task, particularly how visual attention is correlated with the image description process.
Original language | English |
---|---|
Title of host publication | Proceedings of the 27th International Conference on Computational Linguistics |
Pages | 3658–3669 |
Number of pages | 12 |
Publication status | Published - 2018 |
Event | International Conference on Computational Linguistics 2018 - Santa Fe Community Convention Center, Santa Fe, United States |
Duration | 20 Aug 2018 → 26 Aug 2018 |
Conference number | 27 |
Internet address | http://coling2018.org/ |
Conference
Conference | International Conference on Computational Linguistics 2018 |
---|---|
Abbreviated title | COLING 2018 |
Country/Territory | United States |
City | Santa Fe |
Period | 20/08/18 → 26/08/18 |
Internet address | http://coling2018.org/ |