The Multimodal Annotation Software Tool (MAST)

Bruno De Lemos Ribeiro Pinto Cardoso, Neil Cohn

Research output: Contribution to conference › Paper › Scientific › peer-review

Abstract

Multimodal combinations of writing and pictures have become ubiquitous in contemporary society, and scholars have increasingly been turning to analyzing these media. Here we present a resource for annotating these complex documents: the Multimodal Annotation Software Tool (MAST). MAST is an application that allows users to analyze visual and multimodal documents by selecting and annotating visual regions, and to establish relations between annotations that create dependencies and/or constituent structures. By means of schema publications, MAST allows annotation theories to be citable while evolving and being shared. Documents can be annotated using multiple schemas simultaneously, offering more comprehensive perspectives. As a distributed, client-server system, MAST allows for collaborative annotations across teams of users, and features team management and resource access functionalities, facilitating the potential for implementing open science practices. Altogether, we aim for MAST to provide a powerful and innovative annotation tool with application across numerous fields engaging with multimodal media.
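
The abstract describes annotations over selected visual regions, relations between annotations that form dependency or constituent structures, and documents annotated under several schemas at once. As a rough illustration only, the sketch below models that kind of structure in Python; all class and field names (Region, Annotation, Relation, AnnotatedDocument, schema_id, and so on) are assumptions made for this example and do not reflect MAST's actual data model or API.

# Hypothetical sketch of an annotation data model of the kind the abstract
# describes: region selections, schema-bound annotations, and relations that
# encode dependency links or constituent groupings. Names are illustrative
# assumptions, not MAST's real interface.

from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Region:
    """A selected rectangular area on one page of a visual document."""
    page: int
    x: float
    y: float
    width: float
    height: float

@dataclass
class Annotation:
    """A labelled region, categorised under one annotation schema."""
    annotation_id: str
    schema_id: str          # a citable, versioned schema publication (assumed)
    category: str           # a unit type defined by that schema (assumed)
    region: Region

@dataclass
class Relation:
    """A link between annotations; such links can express dependency
    relations or group annotations into constituents."""
    kind: Literal["dependency", "constituency"]
    source_id: str          # head / parent annotation
    target_ids: List[str]   # dependents / constituent children

@dataclass
class AnnotatedDocument:
    """A document annotated with several schemas simultaneously."""
    document_id: str
    schema_ids: List[str]
    annotations: List[Annotation] = field(default_factory=list)
    relations: List[Relation] = field(default_factory=list)

In such a model, annotating the same document with two schemas at once would simply mean listing both schema identifiers and mixing annotations that reference either one, while collaborative, client-server use would layer user and team metadata on top of these records.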
Original language: English
Pages: 6822–6828
Publication status: Published - 2022
Event: Language Resources and Evaluation Conference - Palais du Pharo, Marseille, France
Duration: 21 Jun 2021 – 25 Jul 2021
Conference number: 13
https://lrec2022.lrec-conf.org/en/

Conference

Conference: Language Resources and Evaluation Conference
Abbreviated title: LREC 2022
Country/Territory: France
City: Marseille
Period: 21/06/21 – 25/07/21
Internet address: https://lrec2022.lrec-conf.org/en/

Keywords

  • Multimodality
  • Annotation
  • Annotation software
