Abstract
This study explored two factors that may affect how participants perceive the distance between objects in a visual scene: perceptual grouping and presentation mode (2D versus 3D). More specifically, we examined how these factors affect language production, asking whether they cause speakers to include a redundant color attribute in their object descriptions. We expected speakers to use more redundant color attributes when distractor objects are perceptually close. Our findings revealed an effect of perceptual grouping: speakers indeed used color more often when all objects in a scene belonged to the same perceptual group than when this was not the case. The expected effect of presentation mode (whether scenes were presented in 2D or in 3D) was only partially borne out by the data. Implications of our results for computational models of reference production are discussed.
Original language | English |
---|---|
Title of host publication | CogSci 2014 |
Subtitle of host publication | Cognitive Science Meets Artificial Intelligence: Human and Artificial Agents in Interactive Contexts |
Editors | Paul Bello, Marcello Guarini, Marjorie McShane, Brian Scassellati |
Pages | 2507-2512 |
Number of pages | 6 |
Publication status | Published - 2014 |
Event | CogSci 2014 - Québec City, Canada, 23 Jul 2014 → 26 Jul 2014 |
Conference
Conference | CogSci 2014 |
---|---|
Country/Territory | Canada |
City | Québec City |
Period | 23/07/14 → 26/07/14 |
Keywords
- Reference Production
- Overspecification
- 2D and 3D scene processing
- Perceptual Grouping
- Artificial Agents