Abstract
Human assessment remains the most trusted form of evaluation in NLG, but highly diverse approaches and a proliferation of different quality criteria used by researchers make it difficult to compare results and draw conclusions across papers, with adverse implications for meta-evaluation and reproducibility. In this paper, we present (i) our dataset of 165 NLG papers with human evaluations, (ii) the annotation scheme we developed to label the papers for different aspects of evaluations, (iii) quantitative analyses of the annotations, and (iv) a set of recommendations for improving standards in evaluation reporting. We use the annotations as a basis for examining information included in evaluation reports, and levels of consistency in approaches, experimental design and terminology, focusing in particular on the 200+ different terms that have been used for evaluated aspects of quality. We conclude that due to a pervasive lack of clarity in reports and extreme diversity in approaches, human evaluation in NLG presents as extremely confused in 2020, and that the field is in urgent need of standard methods and terminology.
Original language | English |
---|---|
Title of host publication | Proceedings of the 13th International Conference on Natural Language Generation |
Place of Publication | Dublin, Ireland |
Publisher | Association for Computational Linguistics |
Pages | 169-182 |
Number of pages | 14 |
Publication status | Published - 1 Dec 2020 |
Event | International Conference on Natural Language Generation, online, Dublin, Ireland. Duration: 15 Dec 2020 → 18 Dec 2020. Conference number: 13. https://www.inlg2020.org/ |
Conference
Conference | International Conference on Natural Language Generation |
---|---|
Abbreviated title | INLG 2020 |
Country/Territory | Ireland |
City | Dublin |
Period | 15/12/20 → 18/12/20 |
Internet address | https://www.inlg2020.org/ |