Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative

Mirella Manfredi, Neil Cohn, Mariana De Araújo Andreoli, Paulo Sergio Boggio

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, in an amodal, shared system, or both. In the current study, we investigated semantic processing through a cross-modal paradigm that asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated with events in visual narratives, i.e., seeing images of someone spitting while hearing either a word ("Spitting!") or a sound (the sound of spitting), which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect; however, the distribution of the N400 effect to words (centro-parietal) differed from that to sounds (frontal). In addition, the N400 to words had an earlier latency than the N400 to sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.
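For readers unfamiliar with how such an N400 "effect" is quantified, the sketch below illustrates the standard incongruent-minus-congruent difference-wave approach in Python using the open-source MNE-Python library. It is not the authors' analysis code: the epochs file name, event labels, electrode picks, and the 300-500 ms window are all hypothetical or conventional choices, and only the word/sound contrast and the centro-parietal versus frontal scalp distributions come from the abstract.

```python
# Illustrative sketch only; the paper does not publish analysis code.
# Assumes segmented EEG already exists as an MNE-Python Epochs file
# with hypothetical event labels of the form "modality/congruity".
import mne

# Hypothetical epochs file for the audiovisual paradigm.
epochs = mne.read_epochs("sample_audiovisual-epo.fif")

# Difference wave (incongruent minus congruent) per auditory modality.
n400_word = mne.combine_evoked(
    [epochs["word/incongruent"].average(),
     epochs["word/congruent"].average()],
    weights=[1, -1],
)
n400_sound = mne.combine_evoked(
    [epochs["sound/incongruent"].average(),
     epochs["sound/congruent"].average()],
    weights=[1, -1],
)

# Mean amplitude in a conventional N400 window (300-500 ms), measured
# where the abstract locates each effect: centro-parietal sites for
# words, frontal sites for environmental sounds (channel names assumed).
word_amp = n400_word.copy().pick(["Cz", "CPz", "Pz"]).crop(0.3, 0.5).data.mean()
sound_amp = n400_sound.copy().pick(["Fz", "FCz"]).crop(0.3, 0.5).data.mean()
print(f"word N400 effect: {word_amp * 1e6:.2f} uV")
print(f"sound N400 effect: {sound_amp * 1e6:.2f} uV")
```

In a full analysis these per-participant mean amplitudes would feed a repeated-measures comparison across congruity and modality; the sketch stops at the single-subject difference waves that define the effect.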

Original language: English
Pages (from-to): 1-8
Number of pages: 8
Journal: Brain and Language
Volume: 185
DOI: 10.1016/j.bandl.2018.06.008
Publication status: Published - 2018

Fingerprint

semantics
narrative
event
brain
paradigm
hearing
visual narrative
sound
event-related potentials
modality
semantic processing

Cite this

Manfredi, Mirella; Cohn, Neil; De Araújo Andreoli, Mariana; Boggio, Paulo Sergio. / Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative. In: Brain and Language. 2018; Vol. 185. pp. 1-8.
@article{d6f5a0bb4edc4478accb2ac27b566b63,
title = "Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative",
abstract = "Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated to events in visual narratives-i.e., seeing images of someone spitting while hearing either a word (Spitting!) or a sound (the sound of spitting)-which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect, however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, words had an earlier latency N400 than sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.",
author = "Mirella Manfredi and Neil Cohn and {De Ara{\'u}jo Andreoli}, Mariana and Boggio, {Paulo Sergio}",
note = "Copyright {\circledC} 2018 Elsevier Inc. All rights reserved.",
year = "2018",
doi = "10.1016/j.bandl.2018.06.008",
language = "English",
volume = "185",
pages = "1--8",
journal = "Brain and Language",
issn = "0093-934X",
publisher = "Academic Press Inc.",
}

TY - JOUR

T1 - Listening beyond seeing

T2 - Event-related potentials to audiovisual processing in visual narrative

AU - Manfredi, Mirella

AU - Cohn, Neil

AU - De Araújo Andreoli, Mariana

AU - Boggio, Paulo Sergio

N1 - Copyright © 2018 Elsevier Inc. All rights reserved.

PY - 2018

Y1 - 2018

AB - Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated to events in visual narratives-i.e., seeing images of someone spitting while hearing either a word (Spitting!) or a sound (the sound of spitting)-which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect, however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, words had an earlier latency N400 than sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.

U2 - 10.1016/j.bandl.2018.06.008

DO - 10.1016/j.bandl.2018.06.008

M3 - Article

C2 - 29986168

VL - 185

SP - 1

EP - 8

JO - Brain and Language

JF - Brain and Language

SN - 0093-934X

ER -