Best practices for the human evaluation of automatically generated text

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, Scientific, peer-reviewed

Abstract

Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Natural Language Generation
Place of publication: Tokyo, Japan
Publisher: Association for Computational Linguistics
Pages: 355-368
Number of pages: 14
Publication status: Published - 1 Oct 2019
Event: 12th International Conference on Natural Language Generation (INLG 2019) - Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019
https://www.inlg2019.com

Conference

Conference: 12th International Conference on Natural Language Generation (INLG 2019)
Country: Japan
City: Tokyo
Period: 29/10/19 - 1/11/19
Internet address: https://www.inlg2019.com

Cite this

van der Lee, C., Gatt, A., van Miltenburg, E., Wubben, S., & Krahmer, E. (2019). Best practices for the human evaluation of automatically generated text. In *Proceedings of the 12th International Conference on Natural Language Generation* (pp. 355-368). Association for Computational Linguistics. https://www.aclweb.org/anthology/W19-8643