The most popular way of finding a translation for a source sentence with a neural sequence-to-sequence model is a simple beam search. The target sentence is predicted one word at a time, and after each prediction a fixed number of hypotheses (typically between 4 and 10) is retained for further exploration. This strategy can be suboptimal: these local hard decisions do not take the remainder of the translation into account and cannot be reverted later on.
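The pruning behavior described above can be sketched in a few lines. The following is a minimal illustration, not any particular system's implementation; `score_fn` is a hypothetical callback standing in for the model's next-token log-probabilities given a partial translation.

```python
def beam_search(score_fn, vocab, eos, beam_size=4, max_len=20):
    """Minimal beam search sketch.

    score_fn(prefix) -> {token: log_prob} is assumed to wrap the
    sequence-to-sequence model's next-token distribution.
    """
    beams = [([], 0.0)]  # (partial sequence, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            log_probs = score_fn(seq)
            for tok in vocab:
                candidates.append((seq + [tok], score + log_probs[tok]))
        # The hard local decision: keep only the top `beam_size`
        # extensions; everything else is discarded and never revisited.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates:
            if seq[-1] == eos:
                finished.append((seq, score))
            elif len(beams) < beam_size:
                beams.append((seq, score))
            if len(beams) == beam_size:
                break
        if not beams:
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0]


# Toy model for illustration: prefers "a" for two steps, then end-of-sequence.
import math

def toy_scores(prefix):
    if len(prefix) < 2:
        return {"a": math.log(0.7), "b": math.log(0.2), "</s>": math.log(0.1)}
    return {"a": math.log(0.1), "b": math.log(0.1), "</s>": math.log(0.8)}
```

Because candidates falling outside the top `beam_size` are dropped immediately, a prefix that would have led to a better overall translation can be pruned at an early step, which is exactly the suboptimality noted above.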
Localization processes have remained largely unchanged for the last two decades. Companies have relied on expensive in-house human translators or have turned to machine translation. But the question many localization leaders are starting to ask themselves is: is there another way to localize?