Currently, there is little agreement on how Natural Language Generation (NLG) systems should be evaluated, with a particularly high degree of variation in the way human evaluation is carried out. This paper provides an overview of how (mostly intrinsic) human evaluation is currently conducted and presents a set of best practices, grounded in the literature. These best practices are linked to the stages researchers go through when conducting evaluation research (the planning stage and the execution and release stage), and to the specific steps within these stages. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
| Number of pages | 24 |
| --- | --- |
| Journal | Computer Speech and Language: An official publication of the International Speech Communication Association (ISCA) |
| Publication status | Published - 21 May 2021 |
- Natural Language Generation
- Human evaluation
- Literature review
- Open science