Description
How do we interpret a lightbulb above a head in visual images as meaning inspiration? We investigated the semantic processing of such “upfixes”, elements like lightbulbs or gears that float above characters’ heads. We examined the congruity of face-upfix dyads presented sequentially with words describing their literal (“lightbulb”) or non-literal (“inspiration”) meanings. To examine whether upfixes alone sponsor meanings, upfixes either matched or mismatched the facial expression (e.g., a lightbulb over an excited vs. a sad face). Literal words always evoked faster response times to face-upfix dyads when the words were presented before the images. When images appeared before words, participants responded faster to non-literal words for matching dyads than for mismatching dyads. On the other hand, when literal words appeared before images, participants responded faster to matching dyads than to mismatching dyads. Non-literal words were rated as more congruous with matching dyads, while literal words were rated as more congruous with mismatching dyads. Thus, non-literal upfix meanings (e.g., inspiration) are ingrained in memory only when they match facial expressions, supporting the view that they belong to a constrained visual lexicon. Our study contributes a method combining verbal and visual modalities to the study of non-literal expressions in memory.
| Date made available | 22 Nov 2023 |
| --- | --- |
| Publisher | DataverseNL |