Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation

Xabier Soto, Dimitar Shterionov, Alberto Poncelas, Andy Way

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Machine translation (MT) has benefited from using synthetic training data originating from translating monolingual corpora, a technique known as backtranslation. Combining backtranslated data from different sources has led to better results than when using such data in isolation. In this work we analyse the impact that data translated with rule-based, phrase-based statistical and neural MT systems has on new MT systems. We use a real-world low-resource use-case (Basque-to-Spanish in the clinical domain) as well as a high-resource language pair (German-to-English) to test different scenarios with backtranslation and employ data selection to optimise the synthetic corpora. We exploit different data selection strategies in order to reduce the amount of data used, while at the same time maintaining high-quality MT systems. We further tune the data selection method by taking into account the quality of the MT systems used for backtranslation and lexical diversity of the resulting corpora. Our experiments show that incorporating backtranslated data from different sources can be beneficial, and that availing of data selection can yield improved performance.
Original language: English
Title of host publication: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Place of publication: Online
Publisher: Association for Computational Linguistics
Pages: 3898-3908
Number of pages: 11
DOIs
Publication status: Published - 1 Jul 2020
Externally published: Yes
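
The abstract mentions selecting backtranslated data partly by the lexical diversity of the resulting corpora, but this record does not describe the exact selection criteria. As a minimal, hypothetical sketch (not the authors' method), the Python snippet below ranks synthetic sentence pairs by type-token ratio, a common lexical-diversity proxy, and keeps the top k; the function names and toy data are placeholders.

```python
# Illustrative only: rank backtranslated pairs by a simple lexical-diversity
# proxy (type-token ratio) and keep the top k. The paper's actual selection
# criteria are not detailed in this record; names and data are hypothetical.

def type_token_ratio(sentence: str) -> float:
    """Unique tokens divided by total tokens; a crude lexical-diversity score."""
    tokens = sentence.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def select_backtranslated(pairs, k):
    """Keep the k (synthetic source, authentic target) pairs whose synthetic
    source side shows the highest lexical diversity."""
    ranked = sorted(pairs, key=lambda p: type_token_ratio(p[0]), reverse=True)
    return ranked[:k]

# Toy usage with placeholder sentences.
synthetic_pairs = [
    ("word word word word", "target sentence one"),
    ("a more varied synthetic source sentence", "target sentence two"),
]
print(select_backtranslated(synthetic_pairs, k=1))
```

In practice such a score would be combined with other signals (for instance, the estimated quality of the MT system that produced each backtranslation, as the abstract suggests), but that combination is not specified here.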
