On the difficulty of a distributional semantics of spoken language

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

Abstract

The bulk of research in the area of speech processing concerns itself with supervised approaches to transcribing spoken language into text. In the domain of unsupervised learning, most work on speech has focused on discovering relatively low-level constructs such as phoneme inventories or word-like units. This is in contrast to research on written language, where there is a large body of work on unsupervised induction of semantic representations of words, whole sentences, and longer texts. In this study we examine the challenges of adapting these approaches from written to spoken language. We conjecture that unsupervised learning of spoken-language semantics becomes possible if we abstract away from surface variability. We simulate this setting by using a dataset of utterances spoken by a realistic but uniform synthetic voice. We evaluate two simple unsupervised models which, to varying degrees of success, learn semantic representations of speech fragments. Finally, we suggest possible routes toward transferring our methods to the domain of unrestricted natural speech.
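To make the notion of an unsupervised "semantic representation of a speech fragment" concrete, here is a minimal, hypothetical baseline sketch: mean-pool frame-level acoustic features (e.g. MFCC-like vectors) into a fixed-size utterance embedding and compare embeddings by cosine similarity. This is not one of the paper's two models; the feature dimensions and the random stand-in data are invented for illustration only.

```python
import numpy as np

def mean_pool_embedding(features: np.ndarray) -> np.ndarray:
    """Collapse a (frames x dims) acoustic feature sequence into one vector."""
    return features.mean(axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "utterances": random stand-ins for MFCC frame sequences
# (in practice these would come from a feature extractor run on audio).
rng = np.random.default_rng(0)
utt_a = rng.normal(size=(120, 13))  # 120 frames, 13 coefficients
utt_b = rng.normal(size=(95, 13))

emb_a = mean_pool_embedding(utt_a)
emb_b = mean_pool_embedding(utt_b)
sim = cosine(emb_a, emb_b)
```

A baseline like this requires no transcriptions at all, which is the sense in which the representation is learned "unsupervised"; the paper's point is that such representations only become semantically useful once surface variability (speaker, prosody, channel) is abstracted away, e.g. via a uniform synthetic voice.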
Original language: English
Title of host publication: Proceedings of the Society for Computation in Linguistics
Volume: 2
DOIs
Publication status: Published - 2019
Event: Society for Computation in Linguistics - New York City, United States
Duration: 3 Jan 2019 → …
https://blogs.umass.edu/scil/scil-2019/

Conference

Conference: Society for Computation in Linguistics
Country: United States
City: New York City
Period: 3/01/19 → …


Keywords

  • cs.CL
  • cs.LG
  • cs.SD
  • eess.AS
