Induced lexical categories enhance cross-situational learning of word meanings

    Research output: Contribution to conference › Abstract (Other research output)

    Abstract

    In this paper we bring together two sources of information that have been proposed as clues used by children acquiring word meanings. One mechanism is cross-situational learning, which exploits co-occurrences between words and their referents in the perceptual context accompanying utterances. The other is distributional semantics, where meanings are based on word-word co-occurrences.
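As an illustration of the first mechanism (a minimal sketch, not the authors' implementation — all names below are hypothetical), a cross-situational learner can be reduced to co-occurrence counting: every word in an utterance is paired with every referent in the accompanying scene, and a word's meaning is estimated as the resulting conditional distribution over referents.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Minimal cross-situational learner: accumulates word-referent
    co-occurrence counts and estimates meaning as P(referent | word)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.word_totals = defaultdict(float)

    def observe(self, words, referents):
        # Each word in the utterance co-occurs with every referent
        # present in the accompanying perceptual context.
        for w in words:
            for r in referents:
                self.counts[w][r] += 1.0
                self.word_totals[w] += 1.0

    def meaning(self, word):
        # Normalize co-occurrence counts into P(referent | word).
        total = self.word_totals[word]
        if total == 0:
            return {}
        return {r: c / total for r, c in self.counts[word].items()}

learner = CrossSituationalLearner()
learner.observe(["the", "dog", "barks"], ["DOG", "YARD"])
learner.observe(["a", "dog", "runs"], ["DOG", "PARK"])
print(learner.meaning("dog"))  # DOG gets the highest probability (0.5)
```

Across situations, spurious pairings ("dog"–YARD, "dog"–PARK) each occur once while the true pairing ("dog"–DOG) recurs, so the correct referent gradually dominates the distribution.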

    We propose an integrated incremental model which learns lexical categories from linguistic input as well as word meanings from simulated cross-situational data. The co-occurrence statistics between the learned categories and the perceptual context enable the cross-situational word learning mechanism to form generalizations across word forms.
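One way to picture the category-based generalization described above (an illustrative sketch under assumed names, not the model's actual equations) is to pool referent counts per lexical category alongside per-word counts, and let a word's meaning back off to its category's profile, so that rare or novel word forms inherit expectations from words categorized with them.

```python
from collections import defaultdict

# Per-word and per-category referent co-occurrence counts; the
# word -> category map stands in for the induced categories.
word_counts = defaultdict(lambda: defaultdict(float))
cat_counts = defaultdict(lambda: defaultdict(float))
word_category = {}

def observe(word, category, referents):
    word_category[word] = category
    for r in referents:
        word_counts[word][r] += 1.0
        cat_counts[category][r] += 1.0

def meaning(word, backoff=0.5):
    # Interpolate the word's own statistics with its category's,
    # so sparsely observed words generalize across word forms.
    cat = word_category.get(word)
    referents = set(word_counts[word]) | set(cat_counts.get(cat, {}))
    w_total = sum(word_counts[word].values()) or 1.0
    c_total = sum(cat_counts[cat].values()) or 1.0
    return {r: (1 - backoff) * word_counts[word][r] / w_total
               + backoff * cat_counts[cat][r] / c_total
            for r in referents}

observe("dog", "NOUN", ["DOG"])
observe("cat", "NOUN", ["CAT"])
observe("dax", "NOUN", [])  # novel word: no direct perceptual evidence
print(meaning("dax"))       # inherits DOG/CAT expectations from NOUN
```

With no word-level evidence at all, "dax" still receives a nontrivial distribution over noun-like referents purely from its category membership.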

    Through a number of experiments, we show that our automatically and incrementally induced categories significantly improve the performance of the word learning model, and are closely comparable to a set of gold-standard, manually annotated part-of-speech tags. We perform further analyses to examine the impact of various factors, such as word frequency and class granularity, on the performance of the hybrid model of word and category learning.

    Furthermore, we simulate guessing the most probable semantic features for a novel word from its sentential context in the absence of perceptual cues, an ability that is beyond the reach of a purely cross-situational learner.
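The novel-word scenario can be sketched as follows (a toy illustration with hypothetical categories, contexts, and feature values, not the authors' procedure): assign the unseen word a lexical category from its sentential context alone, then read the most probable semantic features off that category's accumulated feature profile, with no perceptual input involved.

```python
# Hypothetical category knowledge: semantic-feature counts accumulated
# from words already learned in each lexical category.
category_features = {
    "NOUN": {"animate": 6.0, "concrete": 9.0, "action": 1.0},
    "VERB": {"action": 8.0, "concrete": 2.0, "animate": 1.0},
}

def categorize(context_before, context_after):
    # Toy context-based categorizer: a word following a determiner is
    # treated as a noun, otherwise as a verb.  Stands in for the
    # incrementally induced categories of the full model.
    return "NOUN" if context_before in {"the", "a"} else "VERB"

def guess_features(context_before, context_after, top_k=2):
    # Predict the most probable semantic features for a novel word
    # purely from its sentential context: assign it a category, then
    # rank the features in that category's profile.
    cat = categorize(context_before, context_after)
    profile = category_features[cat]
    return sorted(profile, key=profile.get, reverse=True)[:top_k]

# "the dax runs" -> noun-like features, with no perceptual cue at all
print(guess_features("the", "runs"))  # ['concrete', 'animate']
```

A learner tracking only word-referent co-occurrences has nothing to say about "dax" here; the category profile is what supplies the prediction.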
    Original language: English
    Publication status: Published - 2014
    Event: 24th Meeting of Computational Linguistics in The Netherlands (CLIN 2014), Leiden, Netherlands
    Duration: 17 Jan 2014 → …



    Cite this

    Alishahi, A., & Chrupala, G. (2014). Induced lexical categories enhance cross-situational learning of word meanings. Abstract from 24th Meeting of Computational Linguistics in The Netherlands (CLIN 2014), Leiden, Netherlands.