Textual supervision for visually grounded spoken language understanding

Bertrand Higy*, Desmond Elliott, Grzegorz Chrupala

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Visually grounded models of spoken language understanding extract semantic information directly from speech, without relying on transcriptions. This is useful for low-resource languages, where transcriptions can be expensive or impossible to obtain. Recent work showed that these models can be improved if transcriptions are available at training time. However, it is not clear how an end-to-end approach compares to a traditional pipeline-based approach when one has access to transcriptions. Comparing different strategies, we find that the pipeline approach works better when enough text is available. With low-resource languages in mind, we also show that translations can be effectively used in place of transcriptions, although more data is needed to obtain similar results.
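For readers unfamiliar with the setup, the sketch below illustrates, in PyTorch, one way textual supervision can be combined with visual grounding: a speech encoder and an image encoder are trained with a contrastive ranking loss, and an auxiliary CTC loss over transcriptions (or translations) is added when text is available. This is a minimal illustration only; the architecture, dimensions, and the choice of a CTC auxiliary objective are assumptions made here for clarity and do not reproduce the models compared in the paper.

# Illustrative sketch only (not the paper's code): a visually grounded speech
# model with an optional auxiliary CTC loss on transcriptions.
# All dimensions, layer choices, and the CTC objective are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundedSpeechModel(nn.Module):
    def __init__(self, n_mel=40, hidden=256, img_dim=2048, embed=512, vocab=32):
        super().__init__()
        self.speech_enc = nn.GRU(n_mel, hidden, num_layers=2,
                                 batch_first=True, bidirectional=True)
        self.speech_proj = nn.Linear(2 * hidden, embed)   # speech -> joint space
        self.image_proj = nn.Linear(img_dim, embed)       # image features -> joint space
        self.ctc_head = nn.Linear(2 * hidden, vocab)      # auxiliary text supervision

    def forward(self, mel, image_feats):
        states, _ = self.speech_enc(mel)                  # (B, T, 2*hidden)
        speech_emb = F.normalize(self.speech_proj(states.mean(1)), dim=-1)
        image_emb = F.normalize(self.image_proj(image_feats), dim=-1)
        return speech_emb, image_emb, states

def contrastive_loss(speech_emb, image_emb, margin=0.2):
    # Max-margin ranking loss over in-batch negatives.
    scores = speech_emb @ image_emb.t()                   # (B, B) similarity matrix
    pos = scores.diag().unsqueeze(1)
    cost_im = (margin + scores - pos).clamp(min=0)        # speech paired with wrong image
    cost_sp = (margin + scores - pos.t()).clamp(min=0)    # image paired with wrong speech
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_im.masked_fill(mask, 0).mean() + cost_sp.masked_fill(mask, 0).mean()

# Toy forward/backward pass with random tensors standing in for real data.
model = GroundedSpeechModel()
mel = torch.randn(4, 100, 40)            # batch of 4 utterances, 100 mel frames
image_feats = torch.randn(4, 2048)       # precomputed image features
speech_emb, image_emb, states = model(mel, image_feats)
loss = contrastive_loss(speech_emb, image_emb)

# When transcriptions (or translations) are available, add an auxiliary CTC term.
log_probs = model.ctc_head(states).log_softmax(-1).transpose(0, 1)  # (T, B, vocab)
targets = torch.randint(1, 32, (4, 12))  # dummy character targets (blank = 0)
loss = loss + F.ctc_loss(log_probs, targets,
                         input_lengths=torch.full((4,), 100, dtype=torch.long),
                         target_lengths=torch.full((4,), 12, dtype=torch.long))
loss.backward()

A pipeline baseline, by contrast, would first transcribe the speech with a separate ASR system and then train a text-image matching model on the transcripts; the abstract reports that this works better when enough text is available.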
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2020
Place of publication: Online
Publisher: Association for Computational Linguistics
Pages: 2698-2709
Number of pages: 12
DOIs
Publication status: Published - Nov 2020
Event: 2020 Conference on Empirical Methods in Natural Language Processing - Online
Duration: 16 Nov 2020 - 20 Nov 2020
https://2020.emnlp.org/

Conference

Conference: 2020 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2020
Period: 16/11/20 - 20/11/20
Internet address: https://2020.emnlp.org/
