Normalizing tweets with edit scripts and recurrent neural embeddings

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    62 Citations (Scopus)

    Abstract

    Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools, and it can be desirable to convert them to canonical form. We propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings. The text embeddings are generated using a Simple Recurrent Network. We find that enriching the feature set with text embeddings substantially lowers word error rates on an English tweet normalization dataset. Our model improves on the state of the art with little training data and without any lexical resources.
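    The abstract describes learning character-level edit operations from labeled (noisy, canonical) word pairs. A minimal sketch of how such an edit script can be extracted from a training pair via a Levenshtein backtrace (illustrative only; the paper's actual model learns to apply these operations with features that include SRN embeddings):

    ```python
    def edit_script(src, tgt):
        """Recover a character-level edit script turning src into tgt."""
        m, n = len(src), len(tgt)
        # Standard Levenshtein DP table.
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i
        for j in range(n + 1):
            dp[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if src[i - 1] == tgt[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,      # delete
                               dp[i][j - 1] + 1,      # insert
                               dp[i - 1][j - 1] + cost)  # keep/substitute
        # Backtrace to recover the operation sequence.
        ops, i, j = [], m, n
        while i > 0 or j > 0:
            if (i > 0 and j > 0 and
                    dp[i][j] == dp[i - 1][j - 1] + (0 if src[i - 1] == tgt[j - 1] else 1)):
                op = "KEEP" if src[i - 1] == tgt[j - 1] else f"SUB:{tgt[j - 1]}"
                ops.append((op, src[i - 1]))
                i, j = i - 1, j - 1
            elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
                ops.append(("DEL", src[i - 1]))
                i -= 1
            else:
                ops.append((f"INS:{tgt[j - 1]}", ""))
                j -= 1
        return list(reversed(ops))

    # Example: normalizing the tweet token "u" to "you".
    print(edit_script("u", "you"))  # [('INS:y', ''), ('INS:o', ''), ('KEEP', 'u')]
    ```

    At training time, each labeled pair yields such a script, and a classifier can then learn to predict per-character operations for unseen tokens.
    
    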
    Original language: English
    Title of host publication: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics
    Editors: Kristina Toutanova, Hua Wu
    Place of Publication: Baltimore, Maryland
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 680-686
    Volume: 2
    Edition: Short Papers
    ISBN (Electronic): 978-1-937284-73-2
    Publication status: Published - 2014
    Event: The 52nd Annual Meeting of the Association for Computational Linguistics - Baltimore, United States
    Duration: 22 Jun 2014 - 27 Jun 2014

    Conference

    Conference: The 52nd Annual Meeting of the Association for Computational Linguistics
    Country/Territory: United States
    City: Baltimore
    Period: 22/06/14 - 27/06/14
