Representations of language in a model of visually grounded speech signal

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    Abstract

    We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.
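
    The portal record ships no code, but the abstract sketches a concrete architecture: a multi-layer recurrent highway network (RHN) encodes the acoustic frames of an utterance into a state vector that is compared with an image embedding in a joint semantic space. As a rough illustration only, the numpy sketch below implements one RHN recurrence micro-layer in the style of Zilly et al.'s recurrent highway networks and a cosine comparison in the joint space; the gate parametrisation, single micro-layer, dimensions, and initialisation are illustrative assumptions, not the authors' configuration.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def rhn_step(x, s_prev, params):
        # One recurrence micro-layer of a recurrent highway network:
        # a candidate state is gated against the previous state by a
        # transform gate, with the carry gate tied as 1 - transform.
        Wh, Rh, bh, Wt, Rt, bt = params
        h = np.tanh(Wh @ x + Rh @ s_prev + bh)   # candidate state
        t = sigmoid(Wt @ x + Rt @ s_prev + bt)   # transform gate
        return h * t + s_prev * (1.0 - t)        # highway combination

    def encode_utterance(frames, params, dim):
        # Fold the RHN over the acoustic frames and L2-normalise the
        # final state so utterances live on the unit sphere of the
        # joint semantic space.
        s = np.zeros(dim)
        for x in frames:
            s = rhn_step(x, s, params)
        return s / np.linalg.norm(s)

    # Toy usage with hypothetical sizes: 13-dim MFCC-like frames, 32-dim state.
    rng = np.random.default_rng(0)
    in_dim, dim = 13, 32
    params = (rng.normal(0, 0.1, (dim, in_dim)), rng.normal(0, 0.1, (dim, dim)), np.zeros(dim),
              rng.normal(0, 0.1, (dim, in_dim)), rng.normal(0, 0.1, (dim, dim)), np.zeros(dim))
    utterance = encode_utterance(rng.normal(size=(50, in_dim)), params, dim)
    image = rng.normal(size=dim)
    image /= np.linalg.norm(image)
    print("cosine similarity in joint space:", utterance @ image)

    In training, a margin-based ranking objective would push matching utterance-image pairs to higher cosine similarity than mismatched ones; the layer-by-layer analysis described in the abstract then probes the intermediate states for form- and meaning-related information.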
    Original language: English
    Title of host publication: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics
    Publisher: Association for Computational Linguistics
    Pages: 613–622
    DOI: 10.18653/v1/P17-1057
    Publication status: Published - 2017
    Event: Annual Meeting of the Association for Computational Linguistics 2017 - Vancouver, Canada
    Duration: 30 Jul 2017 – 4 Aug 2017
    Conference number: 55
    http://acl2017.org/

    Conference

    Conference: Annual Meeting of the Association for Computational Linguistics 2017
    Abbreviated title: ACL 2017
    Country: Canada
    City: Vancouver
    Period: 30/07/17 – 04/08/17
    Internet address: http://acl2017.org/

    Cite this

    Chrupala, G., Gelderloos, L., & Alishahi, A. (2017). Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (pp. 613–622). Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1057
    Chrupala, Grzegorz ; Gelderloos, Lieke ; Alishahi, Afra. / Representations of language in a model of visually grounded speech signal. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2017. pp. 613–622
    @inproceedings{3116595ce8f84d60aa780a172ed07654,
    title = "Representations of language in a model of visually grounded speech signal",
    abstract = "We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.",
    author = "Grzegorz Chrupala and Lieke Gelderloos and Afra Alishahi",
    year = "2017",
    doi = "10.18653/v1/P17-1057",
    language = "English",
    pages = "613–622",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
    publisher = "Association for Computational Linguistics",

    }

    Chrupala, G, Gelderloos, L & Alishahi, A 2017, Representations of language in a model of visually grounded speech signal. in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 613–622, Annual Meeting of the Association for Computational Linguistics 2017, Vancouver, Canada, 30/07/17. https://doi.org/10.18653/v1/P17-1057

    Representations of language in a model of visually grounded speech signal. / Chrupala, Grzegorz; Gelderloos, Lieke; Alishahi, Afra.

    Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2017. p. 613–622.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    TY - GEN

    T1 - Representations of language in a model of visually grounded speech signal

    AU - Chrupala, Grzegorz

    AU - Gelderloos, Lieke

    AU - Alishahi, Afra

    PY - 2017

    Y1 - 2017

    N2 - We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.

    AB - We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.

    U2 - 10.18653/v1/P17-1057

    DO - 10.18653/v1/P17-1057

    M3 - Conference contribution

    SP - 613

    EP - 622

    BT - Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics

    PB - Association for Computational Linguistics

    ER -

    Chrupala G, Gelderloos L, Alishahi A. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. 2017. p. 613–622. https://doi.org/10.18653/v1/P17-1057