The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondřej Dušek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    Abstract

    We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for the 2021 shared task at the associated GEM Workshop.
    Original language: English
    Title of host publication: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
    Place of Publication: Online
    Publisher: Association for Computational Linguistics
    Pages: 96-120
    Number of pages: 25
    Publication status: Published - 1 Aug 2021
    Event: Workshop on Natural Language Generation, Evaluation, and Metrics - Berkeley Hotel, Thailand
    Duration: 5 Aug 2021 - 6 Aug 2021
    Conference number: 1
    https://www.aclweb.org/portal/content/first-workshop-generation-evaluation-and-metrics-acl-2021

    Conference

    Conference: Workshop on Natural Language Generation, Evaluation, and Metrics
    Abbreviated title: GEM2021
    Country/Territory: Thailand
    Period: 5/08/21 - 6/08/21
    Internet address: https://www.aclweb.org/portal/content/first-workshop-generation-evaluation-and-metrics-acl-2021
