From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    20 Citations (Scopus)

    Abstract

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover both structure and meaning from noisy and ambiguous data across modalities.
    We show that our model indeed learns to predict features of the visual context given phonetically transcribed image descriptions, and show that it represents linguistic information in a hierarchy of levels: lower layers in the stack are comparatively more sensitive to form, whereas higher layers are more sensitive to meaning.
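
    The sketch below illustrates the kind of architecture the abstract describes: a stack of gated recurrent (GRU) layers that reads a phoneme sequence and regresses onto a fixed-size visual feature vector. It is a minimal illustration, not the authors' implementation; the vocabulary size, embedding and hidden dimensions, number of layers, the 4096-dimensional image-feature target, and the mean-squared-error loss are all assumptions made here for the example.

    ```python
    # Minimal sketch (assumed hyperparameters, not the paper's code) of a stacked-GRU
    # model mapping phoneme sequences to visual feature vectors.
    import torch
    import torch.nn as nn


    class PhonemeToImageGRU(nn.Module):
        def __init__(self, n_phonemes=50, emb_dim=64, hidden_dim=512,
                     n_layers=3, visual_dim=4096):
            super().__init__()
            self.embed = nn.Embedding(n_phonemes, emb_dim)
            # Stacked GRU: the paper reports that lower layers are comparatively
            # more sensitive to form, higher layers more sensitive to meaning.
            self.gru = nn.GRU(emb_dim, hidden_dim, num_layers=n_layers,
                              batch_first=True)
            self.to_visual = nn.Linear(hidden_dim, visual_dim)

        def forward(self, phoneme_ids):
            # phoneme_ids: (batch, seq_len) integer-encoded phoneme sequence
            emb = self.embed(phoneme_ids)
            outputs, _ = self.gru(emb)
            # Use the top layer's state at the final time step to predict image features.
            return self.to_visual(outputs[:, -1, :])


    if __name__ == "__main__":
        model = PhonemeToImageGRU()
        batch = torch.randint(0, 50, (8, 30))        # 8 transcriptions, 30 phonemes each
        target = torch.randn(8, 4096)                # stand-in for CNN features of paired images
        pred = model(batch)
        loss = nn.functional.mse_loss(pred, target)  # stand-in regression loss
        loss.backward()
        print(pred.shape, loss.item())
    ```
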
    Original language: English
    Title of host publication: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
    Publisher: International Committee on Computational Linguistics
    Pages: 1309-1319
    Number of pages: 10
    ISBN (Electronic): 978-4-87974-702-0
    Publication status: Published - 2016
