Abstract
Approaches to Quality Estimation (QE) of machine translation have shown promising results at predicting quality scores for translated sentences. However, QE models are often trained on noisy approximations of quality annotations derived from the proportion of post-edited words in translated sentences instead of direct human annotations of translation errors. The latter is a more reliable ground truth but more expensive to obtain. In this paper, we present the first attempt to model the task of predicting the proportion of actual translation errors in a sentence while minimising the need for direct human annotation. For that purpose, we use transfer learning to leverage large-scale noisy annotations and small sets of high-fidelity human-annotated translation errors to train QE models. Experiments on four language pairs and translations obtained by statistical and neural models show consistent gains over strong baselines.
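The transfer-learning recipe outlined above (pre-train a sentence-level regressor on large-scale noisy labels, then fine-tune it on a small set of human-annotated error proportions) can be illustrated roughly as follows. This is a minimal sketch assuming pre-computed sentence features and synthetic stand-in data; the feature dimension, model size, and training settings are illustrative placeholders, not the authors' actual architecture or hyperparameters.

```python
# Minimal sketch of noisy-label pre-training followed by fine-tuning on a
# small high-fidelity set. All data here is synthetic; in practice the
# features would come from an encoder over (source, translation) pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEAT_DIM = 256  # assumed size of pre-computed sentence-pair features

# Simple sentence-level regressor predicting an error proportion in [0, 1].
model = nn.Sequential(
    nn.Linear(FEAT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

def train(features, targets, epochs, lr):
    """Regress predicted proportions against the given targets with MSE."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features).squeeze(-1), targets)
        loss.backward()
        opt.step()
    return loss.item()

# Stage 1: pre-train on large-scale noisy labels (e.g. proportions derived
# from post-edited words); random tensors stand in for real data here.
noisy_x, noisy_y = torch.randn(10_000, FEAT_DIM), torch.rand(10_000)
train(noisy_x, noisy_y, epochs=20, lr=1e-3)

# Stage 2: fine-tune on a small set of human-annotated error proportions,
# with a lower learning rate so the pre-trained weights are only nudged.
human_x, human_y = torch.randn(500, FEAT_DIM), torch.rand(500)
train(human_x, human_y, epochs=10, lr=1e-4)
```

The key design choice this sketch reflects is that the same regression objective is used in both stages; only the label source and learning rate change between the noisy pre-training pass and the high-fidelity fine-tuning pass.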
Original language | English |
---|---|
Title of host publication | Proceedings of the 22nd Annual Conference of the European Association for Machine Translation |
Place of Publication | Lisboa, Portugal |
Publisher | European Association for Machine Translation |
Pages | 145-153 |
Number of pages | 9 |
Publication status | Published - 1 Nov 2020 |
Externally published | Yes |