Best practices for the human evaluation of automatically generated text

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Natural Language Generation
Place of Publication: Tokyo, Japan
Publisher: Association for Computational Linguistics
Pages: 355-368
Number of pages: 14
Publication status: Published - 1 Oct 2019
Event: 12th International Conference on Natural Language Generation (INLG 2019) - Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019
https://www.inlg2019.com

Conference

Conference: 12th International Conference on Natural Language Generation (INLG 2019)
Country/Territory: Japan
City: Tokyo
Period: 29/10/19 - 1/11/19
Internet address: https://www.inlg2019.com
