
Technology for Interactive MT

Written by Spence Green | July 26, 2017

This article describes the technology behind Lilt’s interactive translation suggestions. The details were first published in an academic conference paper, Models and Inference for Prefix-Constrained Machine Translation.

Machine translation systems can translate whole sentences or documents, but they can also be used to finish translations that were started by a person: a form of autocomplete at the sentence level. In the computational linguistics literature, predicting the rest of a sentence is called prefix-constrained machine translation. The prefix of a sentence is the portion authored by the translator; the suffix is suggested by the machine to complete the translation. These suggestions are proposed interactively after each word the translator types. Translators can accept all or part of the proposed suffix with a single keystroke, saving time by automating the most predictable parts of the translation process.
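
To make the prefix/suffix mechanics concrete, here is a minimal sketch of the interaction loop in Python. The `suggest_suffix` function is a hypothetical stand-in for a real translation system, and its canned German output exists only for illustration; none of these names reflect Lilt's production API.

```python
# Minimal sketch of the prefix-constrained interaction loop. The
# suggest_suffix() function is a hypothetical stand-in for a real MT
# system; its canned output is for illustration only.

def suggest_suffix(source: str, prefix: list[str]) -> list[str]:
    """Given the source sentence and the target words typed so far
    (the prefix), propose a completion (the suffix)."""
    # A real system would decode conditioned on `source` and `prefix`;
    # here we fake it with a fixed translation of "the house is small".
    full_translation = "das Haus ist klein".split()
    return full_translation[len(prefix):]

source = "the house is small"
prefix: list[str] = []

prefix.append("das")                       # the translator types a word
print(suggest_suffix(source, prefix))      # ['Haus', 'ist', 'klein']

prefix.extend(suggest_suffix(source, prefix))  # accept the whole suffix
print(" ".join(prefix))                    # das Haus ist klein
```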

In cooperation with Stanford University’s Minh-Thang Luong, Lilt’s research department recently published several new scientific contributions to the field of prefix-constrained machine translation at the 54th Annual Meeting of the Association for Computational Linguistics in Berlin. In addition to extending a neural machine translation model to perform prefix-constrained translation for the first time in the literature, the paper describes three improvements to the widely used statistical phrase-based paradigm: new ways of measuring suffix accuracy, new machine learning techniques, and new suggestion algorithms. The paper shows how each of these innovations improves the suggestion quality of an interactive translation system in large-scale English-German experiments. The methods described in the paper are used in all production systems deployed by Lilt.

In an interactive setting, the first words of the suggested suffix are critical: they are the focus of the user’s attention while composing a translation. The system described in the paper is trained to be especially sensitive to these first words. To achieve this, it includes a new way of accounting for which parts of the sentence have already been translated, so that the suggested continuation is not redundant with what the translator has already written. The technical details include a novel beam search strategy and a hierarchical joint model of alignment and translation that together improve suggestions dramatically: for English-German news, next-word accuracy increases from 28.5% to 41.2%.
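
The paper’s novel beam search strategy and joint alignment model are beyond the scope of this post, but the baseline idea of prefix-constrained decoding is easy to sketch: seed the beam with the translator’s prefix so every hypothesis is forced through it, then decode freely. The sketch below substitutes a toy bigram table (`TOY_LM`, `log_prob_next`) for a real translation model; all names are illustrative assumptions, not the paper’s implementation.

```python
import heapq

# Toy "model": fixed next-word log-probabilities given the previous
# target word, standing in for a real translation model's distribution.
TOY_LM = {
    "<s>":   {"das": -0.2, "ein": -1.8},
    "das":   {"Haus": -0.1, "Auto": -2.5},
    "Haus":  {"ist": -0.2, "</s>": -2.0},
    "ist":   {"klein": -0.3, "</s>": -2.5},
    "klein": {"</s>": -0.05},
}

def log_prob_next(source, target_so_far):
    # A real model would condition on `source` as well; this toy table
    # only looks at the previous target word.
    last = target_so_far[-1] if target_so_far else "<s>"
    return TOY_LM.get(last, {"</s>": 0.0})

def beam_search_suffix(source, prefix, beam_size=4, max_len=20, eos="</s>"):
    # Seed the beam with the translator's prefix: every hypothesis is
    # forced to start with the words already typed.
    beam = [(0.0, list(prefix))]
    for _ in range(max_len):
        candidates = []
        for score, hyp in beam:
            if hyp and hyp[-1] == eos:          # finished hypothesis
                candidates.append((score, hyp))
                continue
            for word, lp in log_prob_next(source, hyp).items():
                candidates.append((score + lp, hyp + [word]))
        # Keep only the highest-scoring hypotheses.
        beam = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
        if all(hyp[-1] == eos for _, hyp in beam):
            break
    _, best = max(beam, key=lambda c: c[0])
    return [w for w in best[len(prefix):] if w != eos]

print(beam_search_suffix("the house is small".split(), ["das"]))
# ['Haus', 'ist', 'klein']
```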

An interactive MT system can also display multiple suggestions to the user. We describe an algorithm for efficiently finding the n best next words directly following a prefix, together with the best suffix for each. Our experiments show that this approach to n-best list extraction, combined with our other improvements, increases the next-word suggestion accuracy of 10-best lists from 33.4% to 55.5%. We also trained a recurrent neural translation system for prefix-constrained translation. This neural system makes even more accurate predictions than our improved phrase-based system, but its inference is two orders of magnitude slower, which is problematic for an interactive setting. (Stay tuned for upcoming results on fast prefix-constrained neural translation.)
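
A naive way to produce such an n-best list, reusing the toy `log_prob_next` and `beam_search_suffix` from the previous sketch: score every candidate next word after the prefix, keep the n best, and decode the best completing suffix for each. The paper’s extraction algorithm is more efficient and more involved; this sketch only illustrates the input/output behavior.

```python
import heapq

def n_best_suggestions(source, prefix, n=10):
    # One-step expansion: score every candidate next word after the prefix.
    dist = log_prob_next(source, prefix)
    top_words = heapq.nlargest(n, dist.items(), key=lambda kv: kv[1])
    # For each candidate next word, decode its best completing suffix.
    return [[word] + beam_search_suffix(source, prefix + [word])
            for word, _ in top_words]

print(n_best_suggestions("the house is small".split(), ["das"], n=2))
# [['Haus', 'ist', 'klein'], ['Auto']]
```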

The paper concludes with a manual error analysis that reveals the strengths and weaknesses of both the phrase-based and neural approaches to prefix-constrained translation. Neural models are particularly good at producing grammatically correct and well-formed target language output. However, they also show a tendency to drop important content words.