Authors
Fernando Alva-Manchego, Lucia Specia, Sara Szoc, Tom Vanallemeersch & Heidi Depraetere
Abstract
In modern computer-aided translation workflows, Machine Translation (MT) systems are used to produce a draft that is then checked and, where needed, edited by human translators. In this scenario, a Quality Estimation (QE) tool can be used to score MT outputs, and a threshold on the QE scores can be applied to decide whether an MT output can be used as-is or requires human post-editing. While this can reduce cost and turnaround times, it can also harm translation quality, as QE models are not 100% accurate. In the framework of the APE-QUEST project (Automated Post-Editing and Quality Estimation), we set up a case study on the trade-off between speed, cost and quality, investigating the benefits of QE models in a real-world scenario where we rely on end-user acceptability as the quality metric. Using data in the public administration domain for English-Dutch and English-French, we experimented with two use cases: assimilation and dissemination. The results shed light on how QE scores can be used to establish thresholds that suit each use case and target language, and demonstrate the potential benefits of adding QE to a translation workflow.