Cross-modal noise compensation in audiovisual words

M. Baart*, Blair C. Armstrong, Clara D. Martin, Ram Frost, Manuel Carreiras

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Perceiving linguistic input is vital for human functioning, but the process is complicated by the fact that the incoming signal is often degraded. However, humans can compensate for unimodal noise by relying on simultaneous sensory input from another modality. Here, we investigated noise compensation for spoken and printed words in two experiments. In the first, behavioral experiment, we observed that accuracy was modulated by reaction time, bias, and sensitivity, but noise compensation could nevertheless be explained via accuracy differences when controlling for these factors. In the second experiment, we also measured event-related potentials (ERPs) and observed robust electrophysiological correlates of noise compensation starting at around 350 ms after stimulus onset, indicating that noise compensation is most prominent at lexical/semantic processing levels.
Original language: English
Article number: 42055
Journal: Scientific Reports
Volume: 7
DOIs
Publication status: Published - 2017
