Multimodal Semantic Learning from Child-Directed Input

Angeliki Lazaridou, Grzegorz Chrupala, Raquel Fernández, Marco Baroni

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    18 Citations (Scopus)

    Abstract

    Children learn the meaning of words by being exposed to perceptually rich situations (linguistic discourse, visual scenes, etc.). Current computational learning models typically simulate these rich situations through impoverished symbolic approximations. In this work, we present a distributed word learning model that operates on child-directed speech paired with realistic visual scenes. The model integrates linguistic and extra-linguistic information (visual and social cues), handles referential uncertainty, and correctly learns to associate words with objects, even in cases of limited linguistic exposure.
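
    The abstract describes a distributed model that learns to associate words with objects from utterances paired with visual scenes under referential uncertainty. As a rough illustration of that general idea only, the sketch below implements a toy cross-situational learner that softly attends over candidate objects in each scene; it is not the authors' model, and all names, dimensions, and the update rule are assumptions.

    # Minimal illustrative sketch (not the authors' implementation): a toy
    # cross-situational learner that associates word vectors with visual
    # object vectors when the intended referent of an utterance is unknown.
    # All names, dimensions, and hyperparameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 50                      # embedding dimensionality (assumed)
    LR = 0.1                      # learning rate (assumed)

    word_emb = {}                 # word -> learned vector

    def get_vec(word):
        if word not in word_emb:
            word_emb[word] = rng.normal(scale=0.1, size=DIM)
        return word_emb[word]

    def softmax(x):
        x = x - x.max()
        e = np.exp(x)
        return e / e.sum()

    def update(utterance, scene_objects):
        """One learning step: utterance is a list of words; scene_objects is a
        list of (label, visual_vector) candidates. Referential uncertainty is
        handled by softly attending to the objects that best match each word."""
        vis = np.stack([v for _, v in scene_objects])
        for w in utterance:
            wv = get_vec(w)
            attn = softmax(vis @ wv)               # soft guess over candidate referents
            target = attn @ vis                    # attention-weighted visual vector
            word_emb[w] = wv + LR * (target - wv)  # move word vector toward likely referent

    # Toy usage: "ball" always co-occurs with the ball object, while the
    # distractor object varies across situations, so the association emerges.
    objs = {name: rng.normal(size=DIM) for name in ["ball", "cup", "dog"]}
    for _ in range(200):
        distractor = rng.choice(["cup", "dog"])
        update(["look", "the", "ball"],
               [("ball", objs["ball"]), (distractor, objs[distractor])])

    for obj_name, obj_vec in objs.items():
        print(f"sim(word 'ball', object {obj_name}): {np.dot(get_vec('ball'), obj_vec):.2f}")

    In this toy setup the word vector for "ball" ends up closest to the ball's visual vector because that object is the only consistent co-occurrence across situations; the paper's model additionally exploits social cues and operates on real child-directed speech and scenes.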
    Original language: English
    Title of host publication: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
    Publisher: Association for Computational Linguistics
    Pages: 387-392
    ISBN (Electronic): 9781941643914
    Publication status: Published - Jun 2016
    Event: North American Chapter of the Association for Computational Linguistics: Human Language Technologies - San Diego, United States
    Duration: 12 Jun 2016 - 17 Jun 2016
    Conference number: 15
    http://naacl.org/naacl-hlt-2016/index.html

    Conference

    Conference: North American Chapter of the Association for Computational Linguistics: Human Language Technologies
    Abbreviated title: NAACL HLT 2016
    Country/Territory: United States
    City: San Diego
    Period: 12/06/16 - 17/06/16
    Internet address: http://naacl.org/naacl-hlt-2016/index.html
