Language-driven anticipatory eye movements in virtual reality

Nicole Eichert, David Peeters*, Peter Hagoort

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

Original language: English
Pages (from-to): 1102-1115
Number of pages: 14
Journal: Behavior Research Methods
Volume: 50
Issue number: 3
DOI: 10.3758/s13428-017-0929-z
Publication status: Published - Jun 2018
Externally published: Yes

Keywords

  • Virtual Reality
  • Prediction
  • Language Comprehension
  • Eyetracking
  • Visual World
  • Spoken Word Recognition
  • Time-Course
  • Comprehension
  • Fixation
  • Models
  • Information
  • Speech
  • Integration

Cite this

Eichert, Nicole; Peeters, David; Hagoort, Peter. Language-driven anticipatory eye movements in virtual reality. In: Behavior Research Methods. 2018; Vol. 50, No. 3, pp. 1102-1115.
@article{894c8cf034cf42b795c588d67e583059,
title = "Language-driven anticipatory eye movements in virtual reality",
abstract = "Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.",
keywords = "Virtual Reality, Prediction, Language Comprehension, Eyetracking, Visual World, Spoken Word Recognition, Time-Course, Comprehension, Fixation, Models, Information, Speech, Integration",
author = "Nicole Eichert and David Peeters and Peter Hagoort",
year = "2018",
month = "6",
doi = "10.3758/s13428-017-0929-z",
language = "English",
volume = "50",
pages = "1102--1115",
journal = "Behavior Research Methods",
issn = "1554-351X",
publisher = "Springer",
number = "3",

}


TY - JOUR

T1 - Language-driven anticipatory eye movements in virtual reality

AU - Eichert, Nicole

AU - Peeters, David

AU - Hagoort, Peter

PY - 2018/6

Y1 - 2018/6

AB - Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

KW - Virtual Reality

KW - Prediction

KW - Language Comprehension

KW - Eyetracking

KW - Visual World

KW - SPOKEN WORD RECOGNITION

KW - TIME-COURSE

KW - COMPREHENSION

KW - FIXATION

KW - MODELS

KW - INFORMATION

KW - SPEECH

KW - INTEGRATION

U2 - 10.3758/s13428-017-0929-z

DO - 10.3758/s13428-017-0929-z

M3 - Article

VL - 50

SP - 1102

EP - 1115

JO - Behavior Research Methods

JF - Behavior Research Methods

SN - 1554-351X

IS - 3

ER -