Abstract
Multimodal combinations of writing and pictures have become ubiquitous in contemporary society, and scholars are increasingly turning to the analysis of these media. Here we present a resource for annotating these complex documents: the Multimodal Annotation Software Tool (MAST). MAST is an application that allows users to analyze visual and multimodal documents by selecting and annotating visual regions, and to establish relations between annotations that create dependency and/or constituency structures. By means of schema publications, MAST allows annotation theories to be cited, shared, and evolved. Documents can be annotated with multiple schemas simultaneously, offering more comprehensive perspectives. As a distributed client-server system, MAST supports collaborative annotation across teams of users and provides team-management and resource-access functionality, facilitating the adoption of open science practices. Altogether, we aim for MAST to provide a powerful and innovative annotation tool with applications across the numerous fields engaging with multimodal media.
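The abstract does not specify MAST's internal data model, but the core ideas it names (annotations over visual regions, relations forming dependency or constituency structures, and multiple schemas applied to one document) can be sketched as plain data structures. The following is a minimal, hypothetical sketch; all class and field names are assumptions for illustration, not MAST's actual API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Region:
    # Axis-aligned bounding box over a document image (hypothetical;
    # a real tool might also support polygonal regions).
    x: int
    y: int
    width: int
    height: int

@dataclass
class Annotation:
    # An annotation ties a visual region to a label drawn from a schema;
    # the schema field stands in for a citable schema publication.
    region: Region
    schema: str
    label: str

@dataclass
class Relation:
    # Relations between annotations can encode dependency edges
    # (head -> dependents) or group annotations into constituents.
    kind: str                               # "dependency" or "constituency"
    source: Annotation
    targets: list[Annotation] = field(default_factory=list)

# One document may carry annotations from multiple schemas simultaneously.
page = Annotation(Region(0, 0, 800, 1200), schema="layout/v1", label="page")
panel = Annotation(Region(40, 60, 300, 400), schema="layout/v1", label="panel")
caption = Annotation(Region(40, 470, 300, 60), schema="text/v1", label="caption")

# A constituency relation grouping the panel and caption under the page.
tree = Relation("constituency", source=page, targets=[panel, caption])
```

This separation of regions, labels, and relations mirrors the abstract's distinction between selecting regions, annotating them, and linking annotations into larger structures.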
Original language | English |
---|---|
Pages | 6822‑6828 |
Publication status | Published - 2022 |
Event | Language Resources and Evaluation Conference (LREC 2022) - Palais du Pharo, Marseille, France. Duration: 21 Jun 2022 → 25 Jun 2022. Conference number: 13. https://lrec2022.lrec-conf.org/en/ |
Conference
Conference | Language Resources and Evaluation Conference |
---|---|
Abbreviated title | LREC 2022 |
Country/Territory | France |
City | Marseille |
Period | 21/06/22 → 25/06/22 |
Internet address | https://lrec2022.lrec-conf.org/en/ |
Keywords
- Multimodality
- Annotation
- Annotation software