The bulk of research in speech processing concerns supervised approaches to transcribing spoken language into text. In the domain of unsupervised learning, most work on speech has focused on discovering relatively low-level constructs such as phoneme inventories or word-like units. This contrasts with research on written language, where there is a large body of work on unsupervised induction of semantic representations of words, sentences, and longer texts. In this study we examine the challenges of adapting these approaches from written to spoken language. We conjecture that unsupervised learning of spoken language semantics becomes possible if we abstract away from surface variability. We simulate this setting by using a dataset of utterances spoken by a realistic but uniform synthetic voice. We evaluate two simple unsupervised models which, with varying degrees of success, learn semantic representations of speech fragments. Finally, we suggest possible routes toward transferring our methods to the domain of unrestricted natural speech.
Title of host publication: Proceedings of the Society for Computation in Linguistics
Publication status: Published - 2019
Event: Society for Computation in Linguistics - New York City, United States
Duration: 3 Jan 2019 → …