Product update: improved translation review categories

by Adrienne Lumb
2 Minute Read

We’re pleased to announce a new feature in our interface that improves the translation review process. Previously, reviewers could make general text comments about errors and introduce categories by using hashtags to indicate the type of error (such as #punctuation or #mistranslation). However, it was still a manual process to collect the different hashtags and identify precisely where the error occurred.

Now, when reviewers find an error in a translation, they can categorize the type of error made, allowing them to provide more detailed feedback. With the new experience, reviewers can select the kind of error (or errors) from a checklist of commonly made mistakes.

The various error types are now fields within the revision report document. Reviewers can also see the exact placement of each error along with its specific type, giving a bird's-eye view of the kinds of errors that occur most frequently in their projects. From there, translation managers can adjust their text specifications and realize even greater efficiency gains.

We love making workflow improvements like this one and the structure they bring to feedback loops.

Reviewers will now see a set of checkboxes to indicate the type of error found, and will still have the option to leave plain-text comments as well.

"The addition of translation review categories makes it much easier to see trends in the type of errors," said Translation Manager Phoebe Killick. "Previously, I had to manually look through the delivery revision report to find trends in the types of errors. Now, I can immediately aggregate and filter by types of errors to find individual translators who are consistently making the same errors, or update the instructions for all translators where there are frequent similar errors across the board."
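The kind of aggregation Phoebe describes can be sketched in a few lines. This is a hypothetical illustration of grouping categorized errors from a revision report; the field names are illustrative, not Lilt's actual report schema:

```python
from collections import Counter

# Hypothetical rows from a revision report, now that error type is a field.
revision_report = [
    {"translator": "A", "error_type": "punctuation"},
    {"translator": "A", "error_type": "mistranslation"},
    {"translator": "B", "error_type": "punctuation"},
    {"translator": "A", "error_type": "punctuation"},
]

# Frequency of each error category across the whole project.
by_type = Counter(row["error_type"] for row in revision_report)

# Errors per translator within one category, to spot translators
# who are consistently making the same mistake.
punct_by_translator = Counter(
    row["translator"]
    for row in revision_report
    if row["error_type"] == "punctuation"
)

print(by_type.most_common(1))  # → [('punctuation', 3)]
print(punct_by_translator)     # → Counter({'A': 2, 'B': 1})
```

With structured fields, this filtering is a one-liner; with ad-hoc hashtags in free-text comments it required manual collection.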

We’d love your feedback on the new experience. Let us know your thoughts @LiltHQ on Twitter.

Localization is profitable. You can measure it.

Having a hard time tying localization to ROI? We've got you. The Ultimate Guide to Measuring Localization ROI covers everything you need to know to tie your market expansion efforts to business goals and revenue.

Get the guide

Interactive and Adaptive Computer Aided Translation

5 Minute Read

Originally published on Kirti Vashee’s blog eMpTy Pages.

Lilt is an interactive and adaptive computer-aided translation tool that integrates machine translation, translation memories, and termbases into one interface that learns from translators. Using Lilt is an entirely different experience from post-editing machine translations — an experience that our users love, and one that yields substantial productivity gains without compromising quality. The first step toward using this new kind of tool is to understand how interactive and adaptive machine assistance is different from conventional MT, and how these technologies relate to exciting new developments in neural MT and deep learning.

Interactive MT doesn’t just translate each segment once and leave the translator to clean up the mess. Instead, each word that the translator types into the Lilt environment is integrated into a new automatic translation suggestion in real time. While text messaging apps autocomplete words, interactive MT autocompletes whole sentences.

Interactive MT actually improves translation quality. In conventional MT post-editing, the computer knows what segment will be translated, but doesn’t know anything about the phrasing decisions that a translator will make. Interactive translations are more accurate because they can observe what the translator has typed so far and update their suggestions based on all available information.
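The core idea — conditioning the suggestion on the translator's typed prefix — can be shown with a toy sketch. A real interactive MT system re-decodes with the prefix as a hard constraint on the beam search; this hypothetical example (not Lilt's implementation) instead filters a small list of scored candidate translations:

```python
def suggest(candidates, typed_prefix):
    """Return the highest-scoring candidate translation consistent with
    what the translator has typed so far, or None if nothing matches."""
    matching = [(score, text) for score, text in candidates
                if text.startswith(typed_prefix)]
    if not matching:
        return None
    return max(matching)[1]  # pick the text with the best score

# Toy candidate translations with model scores (illustrative numbers).
candidates = [
    (0.42, "The cat sat on the mat."),
    (0.31, "The cat is sitting on the mat."),
    (0.27, "A cat sat on the mat."),
]

print(suggest(candidates, ""))            # → "The cat sat on the mat."
print(suggest(candidates, "The cat is"))  # → "The cat is sitting on the mat."
```

With an empty prefix the system proposes its best guess; once the translator types "The cat is", the suggestion updates to the best completion of that phrasing decision — the sentence-level analogue of word autocomplete.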

Read More

What We’re Reading: Neural Machine Translation with Reconstruction

1 Minute Read

Neural MT systems generate translations one word at a time. They can still generate fluent translations because they choose each word based on all of the words generated so far. Typically, these systems are just trained to generate the next word correctly, based on all previous words. One systematic problem with this word-by-word approach to training and translating is that the translations are often too short and omit important content. In the paper Neural Machine Translation with Reconstruction, the authors describe a clever new way to train and translate. During training, their system is encouraged not only to generate each next word correctly but also to correctly generate the original source sentence based on the translation that was generated. In this way, the model is rewarded for generating a translation that is sufficient to describe all of the content in the original source.
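The training objective described above can be written schematically as a sum of two log-likelihood terms. This is a sketch of the idea, not the paper's code; the probability values below are made up for illustration:

```python
import math

def training_loss(translation_logprob, reconstruction_logprob, lam=1.0):
    """Combined objective: penalize both a poorly predicted translation
    (log P(target | source)) and a translation from which the source
    cannot be recovered (log P(source | translation))."""
    return -(translation_logprob + lam * reconstruction_logprob)

# A complete translation: decent translation likelihood, and the source
# is recoverable from it, so reconstruction likelihood is also decent.
full = training_loss(math.log(0.6), math.log(0.5))

# A too-short translation that drops content: it may score *higher* on
# translation likelihood alone, but the source is hard to reconstruct.
short = training_loss(math.log(0.7), math.log(0.1))

print(full < short)  # → True: the complete translation is preferred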

Read More