The most common way to decode a translation from a neural sequence-to-sequence model is a simple beam search. The target sentence is predicted one word at a time, and after each prediction a fixed number of partial hypotheses (typically between 4 and 10) is retained for further exploration. This strategy can be suboptimal: the local hard pruning decisions ignore the remainder of the translation and cannot be reverted later on.
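To make the pruning step concrete, here is a minimal sketch of beam search in Python. The model is stubbed out by a toy `step_fn` that returns a next-token distribution for a given prefix; all names (`beam_search`, `toy_step`, the special tokens) are illustrative, not from any particular NMT library.

```python
import math

def beam_search(step_fn, start_token, end_token, beam_size=4, max_len=10):
    """Keep only the beam_size best partial hypotheses after each step.

    step_fn(prefix) must return a dict mapping next-token -> probability.
    Scores are accumulated as log-probabilities to avoid underflow.
    """
    beams = [(0.0, [start_token])]  # each hypothesis: (log_prob, tokens)
    completed = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == end_token:
                completed.append((score, seq))  # finished: stop expanding
            else:
                for tok, p in step_fn(seq).items():
                    candidates.append((score + math.log(p), seq + [tok]))
        if not candidates:
            break
        # Hard pruning: only the top beam_size hypotheses survive.
        # Everything else is discarded and can never be revisited.
        beams = sorted(candidates, reverse=True)[:beam_size]
    # Fall back to unfinished hypotheses if nothing reached end_token.
    return max(completed) if completed else max(beams)

# Toy "model": a fixed conditional distribution, for demonstration only.
def toy_step(prefix):
    table = {
        ("<s>",): {"the": 0.6, "a": 0.4},
        ("<s>", "the"): {"cat": 0.5, "dog": 0.5},
        ("<s>", "a"): {"cat": 0.9, "dog": 0.1},
    }
    return table.get(tuple(prefix), {"</s>": 1.0})

best = beam_search(toy_step, "<s>", "</s>", beam_size=2)
print(best[1])  # -> ['<s>', 'a', 'cat', '</s>']
```

Note how the toy example already illustrates the trade-off: a purely greedy decoder (beam size 1) would commit to "the" (probability 0.6) and end with a total probability of 0.3, while keeping two hypotheses lets the lower-scoring prefix "a" win overall (0.4 × 0.9 = 0.36). With larger vocabularies and longer sentences, though, even a beam of 4 to 10 can prune away the hypothesis that would have become the best full translation.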
Neural Machine Translation is everywhere (and not just on this blog). Translators want to know how it will affect their livelihood, and internal localization managers want to know how they can make it work for their translation strategy. Whether you're looking to assess the business applications of neural machine translation, or peek under the hood to see how all the gears fit together, these NMT videos can help you wrap your head around the rising tide that is neural machine translation.