Crossmodal binding of fear in voice and face

R.J. Dolan, J.S. Morris, B. de Gelder

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

In social environments, multiple sensory channels are simultaneously engaged in the service of communication. In this experiment, we were concerned with defining the neuronal mechanisms of a perceptual bias in processing simultaneously presented emotional voices and faces. Specifically, we were interested in how bimodal presentation of a fearful voice facilitates recognition of a fearful facial expression. Using event-related functional MRI in a design that crossed sensory modality (visual or auditory) with emotional expression (fearful or happy), we show that perceptual facilitation during face fear processing is expressed through modulation of neuronal responses in the amygdala and the fusiform cortex. These data suggest that the amygdala is important for emotional crossmodal sensory convergence, with the associated perceptual bias during fear processing being mediated by task-related modulation of face-processing regions of the fusiform cortex.
Original language: English
Pages (from-to): 9465-10022
Journal: Proceedings of the National Academy of Sciences of the United States of America (PNAS)
Volume: 98
Issue number: 17
Publication status: Published - 2001

