Best practices for the human evaluation of automatically generated text

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Natural Language Generation
Place of Publication: Tokyo, Japan
Publisher: Association for Computational Linguistics
Number of pages: 14
Publication status: Published - 1 Oct 2019
Event: 12th International Conference on Natural Language Generation (INLG 2019) - Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019


