Multisensory integration in speech processing: Neural mechanisms of cross-modal aftereffects

N. Kilian-Hütten, E. Formisano, J. Vroomen

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-reviewed


Abstract

Traditionally, perceptual neuroscience has focused on unimodal information processing. This has also been true of investigations of speech processing, where the auditory modality was the natural focus of interest. Given the complexity of neuronal processing, this was a logical step while the field was still in its infancy. However, it is clear that this restriction does not do justice to the way we perceive the world around us in everyday interactions. Very rarely is sensory information confined to one modality. Instead, we are constantly confronted with a stream of input to several or all senses, and already in infancy we match facial movements with their corresponding sounds (Campbell et al. 2001; Kuhl and Meltzoff 1982). Moreover, the information that is processed by our individual senses does not stay separated. Rather, the different channels interact and influence each other, affecting perceptual interpretations and constructions (Calvert 2001). Consequently, in the last 15–20 years, the perspective in cognitive science and perceptual neuroscience has shifted to include investigations of such multimodal integrative phenomena. Facilitating cross-modal effects have consistently been demonstrated behaviorally (Shimojo and Shams 2001). When multisensory input is congruent (e.g., semantically and/or temporally), it typically lowers detection thresholds (Frassinetti et al. 2002), shortens reaction times (Forster et al. 2002; Schröger and Widmann 1998), and decreases saccadic eye movement latencies (Hughes et al. 1994) as compared to unimodal exposure. When incongruent input is (artificially) added in a second modality, this usually has the opposite consequences (Sekuler et al. 1997).
Original language: English
Title of host publication: Neural mechanisms of language
Editors: M. Mody
Place of publication: Boston, MA
Publisher: Springer Science
Pages: 105–127
ISBN (Electronic): 978-1-4939-7325-5
ISBN (Print): 978-1-4939-7323-1
Publication status: Published - 2017
