Zooming in on the cognitive neuroscience of visual narrative

Neil Cohn*, Tom Foulsham

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-reviewed

Abstract

Visual narratives like comics and films often shift between showing full scenes and close, zoomed-in viewpoints. These zooms are similar to the "spotlight of attention" cast across a visual scene in perception. Here we measured ERPs to visual narratives (comic strips) that used zoomed-in and full-scene panels either throughout the whole sequence context or at specific critical panels. Zoomed-in panels were automatically generated on the basis of fixations from prior participants' eye movements to the crucial content of panels (Foulsham & Cohn, 2020). We found that these fixation panels evoked a smaller N300 than full-scene panels, indicative of a reduced cost for object identification, but that they also evoked a slightly larger-amplitude N400 response, suggesting a greater cost for accessing semantic memory with constrained content. Panels in sequences where fixation panels persisted across all positions of the sequence also evoked larger posterior P600s, implying that constrained views required more updating or revision processes throughout the sequence. Altogether, these findings suggest that constraining a visual scene to its crucial parts triggers various processes related not only to the density of its information but also to its integration into a sequential context.

Original language: English
Article number: 105634
Number of pages: 13
Journal: Brain and Cognition
Volume: 146
DOIs
Publication status: Published - Dec 2020

Keywords

  • Visual language
  • N400
  • N300
  • P600
  • Comics
  • Film
  • Event-related potentials
  • Semantic integration
  • Language comprehension
  • Situation models
  • Neural evidence
  • Inference
  • Time
  • Prediction