Language-driven anticipatory eye movements in virtual reality

Nicole Eichert, David Peeters*, Peter Hagoort

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

2 Citations (Scopus)

Abstract

Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

Original language: English
Pages (from-to): 1102-1115
Number of pages: 14
Journal: Behavior Research Methods
Volume: 50
Issue number: 3
DOIs
Publication status: Published - Jun 2018
Externally published: Yes

Keywords

  • Virtual Reality
  • Prediction
  • Language Comprehension
  • Eyetracking
  • Visual World
  • Spoken Word Recognition
  • Time-Course
  • Comprehension
  • Fixation
  • Models
  • Information
  • Speech
  • Integration
