Visually grounded models of spoken language - A survey of datasets, architectures and evaluation techniques.

Research output: Contribution to journal › Article › Scientific › peer-review

16 Citations (Scopus)

Abstract

This survey provides an overview of the evolution of visually grounded models of spoken language over the last 20 years. Such models are inspired by the observation that when children pick up a language, they rely on a wide range of indirect and noisy clues, crucially including signals from the visual modality co-occurring with spoken utterances. Several fields have made important contributions to this approach to modeling or mimicking the process of learning language: Machine Learning, Natural Language and Speech Processing, Computer Vision and Cognitive Science. The current paper brings together these contributions in order to provide a useful introduction and overview for practitioners in all these areas. We discuss the central research questions addressed, the timeline of developments, and the datasets which enabled much of this work. We then summarize the main modeling architectures and offer an exhaustive overview of the evaluation metrics and analysis techniques.
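The canonical architecture in this literature (not detailed in the abstract itself) pairs a speech encoder and an image encoder that project both modalities into a shared embedding space, trains them with a ranking loss over matched utterance-image pairs, and evaluates with cross-modal retrieval recall@k. The sketch below is purely illustrative, not code from the paper: the module names, dimensions, pooling choice, and hyperparameters are all hypothetical stand-ins for the kind of dual-encoder setup the survey covers.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechEncoder(nn.Module):
    """Encodes a spectrogram (batch, time, n_mels) into one L2-normalized vector."""
    def __init__(self, n_mels=40, dim=512):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x):
        out, _ = self.rnn(x)                 # (batch, time, 2*dim)
        pooled = out.mean(dim=1)             # temporal mean pooling
        return F.normalize(self.proj(pooled), dim=-1)

class ImageEncoder(nn.Module):
    """Projects precomputed image features (e.g. CNN outputs) into the shared space."""
    def __init__(self, feat_dim=2048, dim=512):
        super().__init__()
        self.proj = nn.Linear(feat_dim, dim)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

def triplet_loss(speech_emb, image_emb, margin=0.2):
    """Max-margin ranking loss over in-batch negatives, in both directions."""
    sims = speech_emb @ image_emb.t()        # (batch, batch) cosine similarities
    pos = sims.diag().unsqueeze(1)           # matched pairs sit on the diagonal
    cost_s = F.relu(margin + sims - pos)     # utterance -> image direction
    cost_i = F.relu(margin + sims - pos.t()) # image -> utterance direction
    mask = 1.0 - torch.eye(len(sims))        # exclude the positives themselves
    return ((cost_s + cost_i) * mask).sum() / mask.sum()

def recall_at_k(speech_emb, image_emb, k=10):
    """Fraction of utterances whose matching image appears in the top-k retrieved."""
    sims = speech_emb @ image_emb.t()
    ranks = sims.argsort(dim=1, descending=True)
    targets = torch.arange(len(sims)).unsqueeze(1)
    return (ranks[:, :k] == targets).any(dim=1).float().mean().item()

# Toy usage: random tensors stand in for real spectrograms and image features.
speech = SpeechEncoder()(torch.randn(8, 100, 40))
images = ImageEncoder()(torch.randn(8, 2048))
print(triplet_loss(speech, images).item(), recall_at_k(speech, images, k=5))

The in-batch negative sampling and the retrieval-based evaluation shown here are the common pattern; individual models in the survey differ mainly in the encoders (CNNs, RNNs, or transformers over speech) and in the choice of ranking versus contrastive loss.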

Original language: English
Pages (from-to): 673-707
Number of pages: 35
Journal: Journal of Artificial Intelligence Research
Volume: 73
DOIs
Publication status: Published - Feb 2022
