Neural MT systems generate translations one word at a time. They can still produce fluent translations because each word is chosen based on the source sentence and all of the words generated so far. Typically, these systems are trained only to predict the next word correctly, given the source and the previous target words. One systematic problem with this word-by-word approach to training and translating is that the translations are often too short and omit important source content.
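To make the word-by-word picture concrete, here is a minimal sketch of greedy, one-word-at-a-time decoding. The `next_word_probs` callable and the `toy_model` below are hypothetical stand-ins for a trained NMT model, not anything from the paper; real systems score words with an encoder-decoder network and usually use beam search rather than a purely greedy choice.

```python
# A minimal sketch of word-by-word (autoregressive) decoding, assuming a
# hypothetical next_word_probs(source, prefix) callable that returns a
# probability for every vocabulary word given the source sentence and the
# partial translation so far.

def greedy_translate(source, next_word_probs, eos="</s>", max_len=50):
    """Generate a translation one word at a time, always picking the most
    probable next word given the source and the words generated so far."""
    translation = []
    for _ in range(max_len):
        probs = next_word_probs(source, translation)  # dict: word -> probability
        word = max(probs, key=probs.get)              # most likely next word
        if word == eos:                               # stop at end-of-sentence
            break
        translation.append(word)
    return translation


# Toy stand-in model: emits a fixed short output regardless of the source,
# just to make the loop executable end to end.
def toy_model(source, prefix):
    canned = ["hello", "world", "</s>"]
    next_word = canned[len(prefix)] if len(prefix) < len(canned) else "</s>"
    return {w: (0.9 if w == next_word else 0.05) for w in ["hello", "world", "</s>"]}


print(greedy_translate(["bonjour", "le", "monde"], toy_model))  # ['hello', 'world']
```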
In the paper Neural Machine Translation with Reconstruction, the authors describe a clever new way to train and translate. During training, their system is encouraged not only to generate each next word correctly but also to be able to reconstruct the original source sentence from the translation it produced. In this way, the model is rewarded for generating a translation that contains enough information to recover all of the content of the original source.
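As a rough illustration of the idea, the sketch below combines a standard likelihood term with a reconstruction term into a single training loss. The names `translation_logprob`, `reconstruction_logprob`, and the weight `lam` are hypothetical placeholders; in the paper the reconstructor reads the decoder's hidden states, which is abstracted away here.

```python
# A hedged sketch of a reconstruction-augmented training objective.
# translation_logprob plays the role of log P(target | source) and
# reconstruction_logprob plays the role of log P(source | translation side);
# both are placeholders for real model components.

def training_loss(source, target, translation_logprob, reconstruction_logprob, lam=1.0):
    """Negative of the joint objective: reward the model both for predicting
    each target word and for being able to reconstruct the source from the
    translation it produced."""
    likelihood = translation_logprob(source, target)         # log P(target | source)
    reconstruction = reconstruction_logprob(target, source)  # log P(source | target)
    return -(likelihood + lam * reconstruction)


# Toy usage with made-up scores, just to show how the two terms combine.
loss = training_loss(
    source=["bonjour"], target=["hello"],
    translation_logprob=lambda s, t: -1.2,
    reconstruction_logprob=lambda t, s: -0.8,
    lam=1.0,
)
print(loss)  # 2.0
```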
When translating, their system generates multiple candidate translations and chooses the final output so that two conditions hold simultaneously: the translation is predictable given the source, and the source is predictable given the translation. Enforcing this two-way coherence yields a substantial gain in translation quality. In their experiments, translation lengths were much closer to the human reference lengths, and the system produced too much or too little content less often than a strong neural MT baseline.
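The selection step can be thought of as reranking: score each candidate both ways and keep the one with the best combined score. The sketch below uses the same kind of hypothetical scoring functions as above and is not the authors' exact decoding procedure.

```python
# A minimal sketch of two-way candidate selection: keep the candidate that
# scores well both as a translation of the source and as something from which
# the source can be reconstructed. The scorers and the weight lam are
# hypothetical placeholders.

def rerank(source, candidates, translation_logprob, reconstruction_logprob, lam=1.0):
    def score(candidate):
        return (translation_logprob(source, candidate)
                + lam * reconstruction_logprob(candidate, source))
    return max(candidates, key=score)


# Toy usage: the short candidate is likely under the translation model alone,
# but the longer one reconstructs the source better and wins overall.
scores_t = {"good": -1.0, "good morning everyone": -1.5}
scores_r = {"good": -3.0, "good morning everyone": -0.5}
best = rerank(
    source="bonjour à tous",
    candidates=["good", "good morning everyone"],
    translation_logprob=lambda s, c: scores_t[c],
    reconstruction_logprob=lambda c, s: scores_r[c],
)
print(best)  # "good morning everyone"
```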
When the paper was originally released, Denny Britz provided a more technical summary of it, and first author Zhaopeng Tu responded to his comments.
Paper: Neural Machine Translation with Reconstruction
Authors: Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li
Venue: Association for the Advancement of Artificial Intelligence (AAAI) 2017