Text segmentation with character-level text embeddings

    Research output: Contribution to conference › Paper



    Learning word representations has recently seen much success in computational linguistics. However, assuming sequences of word tokens as input to linguistic analysis is often unjustified. For many languages, word segmentation is a non-trivial task, and naturally occurring text is sometimes a mixture of natural language strings and other character data. We propose to learn text representations directly from raw character sequences by training a simple recurrent network to predict the next character in text. The network uses its hidden layer to evolve abstract representations of the character sequences it sees. To demonstrate the usefulness of the learned text embeddings, we use them as features in a supervised character-level text segmentation and labeling task: recognizing spans of text containing programming language code. By using the embeddings as features, we are able to substantially improve over a baseline which uses only surface character n-grams.
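    The core idea can be sketched as follows: an Elman-style simple recurrent network reads raw characters one at a time, and its hidden state after each character serves as that position's embedding. The sketch below shows only the forward pass with random weights; the vocabulary, layer sizes, and initialization are illustrative assumptions, not the paper's actual configuration (which also trains the weights on next-character prediction).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy character vocabulary (assumption: built from the text at hand).
    chars = sorted(set("def f(x): return x + 1"))
    char_to_id = {c: i for i, c in enumerate(chars)}
    V, H = len(chars), 16  # vocabulary size, hidden-layer size (illustrative)

    W_xh = rng.normal(scale=0.1, size=(H, V))  # input  -> hidden
    W_hh = rng.normal(scale=0.1, size=(H, H))  # hidden -> hidden (recurrence)
    W_hy = rng.normal(scale=0.1, size=(V, H))  # hidden -> next-character logits

    def embed(text):
        """Return one hidden-state embedding per character position."""
        h = np.zeros(H)
        states = []
        for c in text:
            x = np.zeros(V)
            x[char_to_id[c]] = 1.0              # one-hot character input
            h = np.tanh(W_xh @ x + W_hh @ h)    # Elman recurrence
            states.append(h.copy())
        return np.asarray(states)

    embs = embed("def f(x):")
    print(embs.shape)  # one H-dimensional embedding per character position
    ```

    In the paper's setting, these per-character hidden states would then be concatenated with surface features (such as character n-grams) and fed to a supervised segmenter/labeler; here they are produced by untrained weights purely to show the data flow.
    
    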
    Original language: English
    Publication status: Published - 18 Sept 2013
    Event: Workshop on Deep Learning for Audio, Speech and Language Processing, ICML 2013 - Atlanta, United States
    Duration: 16 Jun 2013 → …


    Workshop: Workshop on Deep Learning for Audio, Speech and Language Processing, ICML 2013
    Country/Territory: United States
    Period: 16/06/13 → …


