No ‘self’ advantage for audiovisual speech aftereffects

Maria Modelska, Marie Pourquié, Martijn Baart

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory ‘self’ advantages. We assessed whether there is a ‘self’ advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a ‘self’ advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal.
Original language: English
Article number: 658
Number of pages: 10
Journal: Frontiers in Psychology
Volume: 10
DOI: 10.3389/fpsyg.2019.00658
Publication status: Published - 2019


Keywords

  • AUDITORY SPEECH
  • ELECTROPHYSIOLOGICAL EVIDENCE
  • HEARING-LIPS
  • IDENTIFICATION
  • INFORMATION
  • LISTENERS
  • PERCEPTION
  • PHONETIC RECALIBRATION
  • SELECTIVE ADAPTATION
  • VISUAL SPEECH
  • adaptation
  • lip-reading
  • recalibration
  • self-advantage
  • speech perception

Cite this

@article{0a371d4b0a8a4b17a1d17e298dd05418,
title = "No ‘self’ advantage for audiovisual speech aftereffects",
abstract = "Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory ‘self’ advantages. We assessed whether there is a ‘self’ advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a ‘self’ advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal.",
keywords = "AUDITORY SPEECH, ELECTROPHYSIOLOGICAL EVIDENCE, HEARING-LIPS, IDENTIFICATION, INFORMATION, LISTENERS, PERCEPTION, PHONETIC RECALIBRATION, SELECTIVE ADAPTATION, VISUAL SPEECH, adaptation, lip-reading, recalibration, self-advantage, speech perception",
author = "Maria Modelska and Marie Pourqui{\'e} and Martijn Baart",
year = "2019",
doi = "10.3389/fpsyg.2019.00658",
language = "English",
volume = "10",
journal = "Frontiers in Psychology",
issn = "1664-1078",
publisher = "Frontiers Media S.A.",

}
