No ‘self’ advantage for audiovisual speech aftereffects

Maria Modelska, Marie Pourquié, Martijn Baart*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory ‘self’ advantages. We assessed whether there is a ‘self’ advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a ‘self’ advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal.
Original language: English
Article number: 658
Number of pages: 10
Journal: Frontiers in Psychology
Volume: 10
DOIs
Publication status: Published - 2019

Keywords

  • AUDITORY SPEECH
  • ELECTROPHYSIOLOGICAL EVIDENCE
  • HEARING-LIPS
  • IDENTIFICATION
  • INFORMATION
  • LISTENERS
  • PERCEPTION
  • PHONETIC RECALIBRATION
  • SELECTIVE ADAPTATION
  • VISUAL SPEECH
  • adaptation
  • lip-reading
  • recalibration
  • self-advantage
  • speech perception

