Abstract
We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of the speech signal, and show that it learns to extract both form- and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that the encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas the encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.
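As a rough illustration of the setup described in the abstract, the sketch below pairs a recurrent speech encoder with an image-feature projection and trains both against a joint embedding space using a margin-based contrastive loss. The layer sizes, the GRU stack standing in for the paper's recurrent highway network, the MFCC-style input features, and the exact loss formulation are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a visually grounded speech model: a recurrent
# speech encoder and an image projection share a joint semantic space.
# All hyperparameters and the GRU stand-in for the recurrent highway
# network are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechEncoder(nn.Module):
    def __init__(self, n_feat=13, hidden=512, depth=3, joint=512):
        super().__init__()
        # Multi-layer recurrent encoder over acoustic frames
        # (stand-in for the paper's recurrent highway network).
        self.rnn = nn.GRU(n_feat, hidden, num_layers=depth, batch_first=True)
        self.proj = nn.Linear(hidden, joint)

    def forward(self, frames):                 # frames: (batch, time, n_feat)
        states, _ = self.rnn(frames)
        utterance = states[:, -1]              # last time step as utterance vector
        return F.normalize(self.proj(utterance), dim=-1)

class ImageEncoder(nn.Module):
    def __init__(self, feat_dim=4096, joint=512):
        super().__init__()
        self.proj = nn.Linear(feat_dim, joint)  # project precomputed CNN features

    def forward(self, feats):                   # feats: (batch, feat_dim)
        return F.normalize(self.proj(feats), dim=-1)

def contrastive_loss(speech, image, margin=0.2):
    """Hinge loss pushing matching speech/image pairs above mismatched ones."""
    sims = speech @ image.t()                   # cosine similarities (unit vectors)
    pos = sims.diag().unsqueeze(1)              # similarity of each true pair
    cost_s = (margin + sims - pos).clamp(min=0)      # speech vs. wrong images
    cost_i = (margin + sims - pos.t()).clamp(min=0)  # images vs. wrong speech
    mask = torch.eye(sims.size(0), dtype=torch.bool)
    return cost_s.masked_fill(mask, 0.0).mean() + cost_i.masked_fill(mask, 0.0).mean()

# Usage sketch: loss = contrastive_loss(SpeechEncoder()(mfcc_batch),
#                                       ImageEncoder()(image_feature_batch))
```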
Original language | English |
---|---|
Title of host publication | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics |
Publisher | Association for Computational Linguistics |
Pages | 613–622 |
DOIs | |
Publication status | Published - 2017 |
Event | Annual Meeting of the Association for Computational Linguistics 2017, Vancouver, Canada. Duration: 30 Jul 2017 → 4 Aug 2017. Conference number: 55. http://acl2017.org/ |
Conference
Conference | Annual Meeting of the Association for Computational Linguistics 2017 |
---|---|
Abbreviated title | ACL 2017 |
Country/Territory | Canada |
City | Vancouver |
Period | 30/07/17 → 4/08/17 |
Internet address | http://acl2017.org/ |