Quality In, Quality Out: Learning from Actual Mistakes

Frederic Blain*, Nikolaos Aletras, Lucia Specia

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Approaches to Quality Estimation (QE) of machine translation have shown promising results at predicting quality scores for translated sentences. However, QE models are often trained on noisy approximations of quality annotations derived from the proportion of post-edited words in translated sentences instead of direct human annotations of translation errors. The latter is a more reliable ground truth but more expensive to obtain. In this paper, we present the first attempt to model the task of predicting the proportion of actual translation errors in a sentence while minimising the need for direct human annotation. For that purpose, we use transfer learning to leverage large-scale noisy annotations and small sets of high-fidelity human-annotated translation errors to train QE models. Experiments on four language pairs and translations obtained by statistical and neural models show consistent gains over strong baselines.
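The transfer-learning recipe the abstract describes (pre-train on abundant noisy post-editing-derived scores, then fine-tune on a small set of human-annotated error proportions) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's architecture: the bag-of-embeddings encoder, dimensions, learning rates, and synthetic data are all placeholder assumptions.

```python
# Sketch of two-phase transfer learning for sentence-level QE:
# Phase 1 pre-trains a regressor on large-scale noisy labels
# (e.g. HTER-style proportion of post-edited words);
# Phase 2 fine-tunes on a small set of high-fidelity
# human-annotated error proportions.
import torch
import torch.nn as nn

class QERegressor(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=128):
        super().__init__()
        # Placeholder encoder: mean bag-of-embeddings over token ids.
        self.emb = nn.EmbeddingBag(vocab_size, emb_dim)
        self.head = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, token_ids):
        # Predict a quality/error proportion in [0, 1].
        return torch.sigmoid(self.head(self.emb(token_ids))).squeeze(-1)

def train(model, inputs, targets, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()

model = QERegressor()

# Phase 1: pre-train on abundant noisy annotations (synthetic stand-ins here).
noisy_x = torch.randint(0, 10_000, (5_000, 20))  # tokenised MT outputs
noisy_y = torch.rand(5_000)                      # noisy post-editing-based scores
train(model, noisy_x, noisy_y, epochs=5, lr=1e-3)

# Phase 2: fine-tune on the small gold set, with a lower learning rate
# so the noisy pre-training signal is adapted rather than overwritten.
gold_x = torch.randint(0, 10_000, (200, 20))     # human-annotated sentences
gold_y = torch.rand(200)                         # true error proportions
train(model, gold_x, gold_y, epochs=20, lr=1e-4)
```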
Original language: English
Title of host publication: Proceedings of the 22nd Annual Conference of the European Association for Machine Translation
Place of publication: Lisboa, Portugal
Publisher: European Association for Machine Translation
Pages: 145-153
Number of pages: 9
Publication status: Published - 1 Nov 2020
Externally published: Yes
