Role of Context in Unsupervised Sentence Representation Learning: the Case of Dialog Act Modeling.

    Research output: Contribution to conference › Paper › Scientific › peer-review

    Abstract

    Unsupervised learning of word representations involves capturing the contextual information surrounding word occurrences, which can be grounded in the observation that word form is largely disconnected from word meaning. While there are fewer reasons to believe that the same holds for sentences, learning through context has been carried over to learning representations of word sequences. However, this line of work pays little to no attention to the role of context in inferring sentence representations. In this article, we present a dialog act tag probing task designed to explicitly compare content- and context-oriented sentence representations inferred on utterances of telephone conversations (SwDA). Our results suggest that there is no clear benefit of context-based sentence representations over content-based sentence representations. However, there is a very clear benefit to increasing the dimensionality of the sentence vectors in nearly all approaches.
    Original language: English
    Pages: 8784-8792
    Number of pages: 9
    DOIs
    Publication status: Published - 2023
    Event: The 2023 Conference on Empirical Methods in Natural Language Processing - Resort World Convention Centre, Singapore, Singapore
    Duration: 6 Dec 2023 - 10 Dec 2023
    https://2023.emnlp.org/

    Conference

    Conference: The 2023 Conference on Empirical Methods in Natural Language Processing
    Abbreviated title: EMNLP 2023
    Country/Territory: Singapore
    City: Singapore
    Period: 6/12/23 - 10/12/23
