Best practices for the human evaluation of automatically generated text

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Natural Language Generation
Place of Publication: Tokyo, Japan
Publisher: Association for Computational Linguistics
Pages: 355-368
Number of pages: 14
Publication status: Published - 1 Oct 2019
Event: 12th International Conference on Natural Language Generation (INLG 2019) - Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019
https://www.inlg2019.com

Conference

Conference: 12th International Conference on Natural Language Generation (INLG 2019)
Country: Japan
City: Tokyo
Period: 29/10/19 - 1/11/19
Internet address: https://www.inlg2019.com

Cite this

van der Lee, C., Gatt, A., van Miltenburg, E., Wubben, S., & Krahmer, E. (2019). Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation (pp. 355-368). Tokyo, Japan: Association for Computational Linguistics.
van der Lee, Chris; Gatt, Albert; van Miltenburg, Emiel; Wubben, Sander; Krahmer, Emiel. / Best practices for the human evaluation of automatically generated text. Proceedings of the 12th International Conference on Natural Language Generation. Tokyo, Japan: Association for Computational Linguistics, 2019. pp. 355-368.
@inproceedings{0c962280b7244ada878649fed4228c8a,
title = "Best practices for the human evaluation of automatically generated text",
abstract = "Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.",
author = "{van der Lee}, Chris and Albert Gatt and {van Miltenburg}, Emiel and Sander Wubben and Emiel Krahmer",
year = "2019",
month = "10",
day = "1",
language = "English",
pages = "355--368",
booktitle = "Proceedings of the 12th International Conference on Natural Language Generation",
publisher = "Association for Computational Linguistics",
}

van der Lee, C, Gatt, A, van Miltenburg, E, Wubben, S & Krahmer, E 2019, Best practices for the human evaluation of automatically generated text. in Proceedings of the 12th International Conference on Natural Language Generation. Association for Computational Linguistics, Tokyo, Japan, pp. 355-368, 12th International conference on Natural Language Generation (INLG 2019), Tokyo, Japan, 29/10/19.

Best practices for the human evaluation of automatically generated text. / van der Lee, Chris; Gatt, Albert; van Miltenburg, Emiel; Wubben, Sander; Krahmer, Emiel.

Proceedings of the 12th International Conference on Natural Language Generation. Tokyo, Japan: Association for Computational Linguistics, 2019. p. 355-368.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

TY - GEN
T1 - Best practices for the human evaluation of automatically generated text
AU - van der Lee, Chris
AU - Gatt, Albert
AU - van Miltenburg, Emiel
AU - Wubben, Sander
AU - Krahmer, Emiel
PY - 2019/10/1
Y1 - 2019/10/1
N2 - Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
AB - Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
M3 - Conference contribution
SP - 355
EP - 368
BT - Proceedings of the 12th International Conference on Natural Language Generation
PB - Association for Computational Linguistics
CY - Tokyo, Japan
ER -

van der Lee C, Gatt A, van Miltenburg E, Wubben S, Krahmer E. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation. Tokyo, Japan: Association for Computational Linguistics. 2019. p. 355-368.