Lilt Labs

Learn and explore everything you need to know about global experience

The Future of Language Work: Business Perspectives

It’s not every day that translators and technology professionals come together to discuss the state of the language industry, but that’s exactly what happened last month in Santa Clara, CA. The event, The Future of Language Work: Enterprise, Technology, and Translation Professional Perspectives, was hosted by translation startup Lilt and featured two panel discussions on topics ranging from language technology advancements to the effect of globalization on translation demand. While the first panel focused on the past, present and future of translation technology, the second turned to how technology is affecting language work. The panel, moderated by Katie Botkin, Managing Editor of Multilingual Magazine, included David Snider, Globalization Architect at LinkedIn; Anna Schlegel, Sr. Director of Globalization Programs and Information Strategy at NetApp; Jost Zetzsche, Localization Consultant and Writer at the International Writers’ Group; and Max Troyer, Assistant Professor and Program Coordinator, Translation & Localization Management at the Monterey Institute of International Studies.

The Future of Language Work: Enterprise, Technology, and Translation Professional Perspectives

Around 100 professionals from the language and technology industries came together in Santa Clara, CA last month to discuss the future of language work. The event, The Future of Language Work: Enterprise, Technology, and Translation Professional Perspectives, was hosted by translation startup Lilt and featured two panel discussions on topics ranging from language technology advancements to the effect of globalization on translation demand.

Case Study: Zendesk + Lilt

In a recent case study, Zendesk talked to us about using a combination of human and machine translation to translate their large database of support content.

A Whole New Level of Productivity: Introducing Our Improved Editor

We take translator feedback about our app very seriously, which is why, when we started looking at how to improve our editor, we combed through all of your suggestions and requests from the past year. We were looking to solve some big pain points that many of you identified as hindrances to your productivity. After much hard work, translators and project managers were invited to test the new features and interface. We listened to feedback, made adjustments and tested some more. The result is a new editor designed to save you heaps of time. It’s quicker, smoother and more efficient, so you can accomplish more than ever before. We really hope you like it. Keep reading for a tour of the newest features we’ve added to make your life a whole lot easier.

Is Post-editing Dead?

Many of us who have had the displeasure of post-editing a translation created by a machine would agree that the process is slow, tedious and out of style. However, there are always two sides to every story, so we decided to ask our Twitter followers for their opinion of the post-editing process. The results? 47% of translators would rather go to the dentist than post-edit.

Team Lilt Spotlight: Josie Pang

What is your role at Lilt? I work on sales, marketing and customer success at Lilt. I’m incredibly excited by our product and its potential, so I’m thrilled to be working alongside our translators and sharing our technology with the world.

Team Lilt Spotlight: Marina Lee

This week we’re chatting with Lilt team member, Marina Lee. Keep reading to learn more about Marina and don’t forget to say hello to her at ATA 58!

Case Study: First Large-Scale Application of Auto-Adaptive MT

Combining Machine Translation (MT) with auto-adaptive Machine Learning (ML) enables a new paradigm of machine assistance. Such systems learn from the experience, intelligence and insights of their human users, improving productivity by working in partnership, making suggestions and improving accuracy over time. The net result is that human reviewers produce far higher volumes of content, at nearly the same level of quality, in a fraction of the time and at a fraction of the cost. Machine assistance can save customers half or more of the price of traditional high-quality human translation services. And if you have been using machine translation alone and have been unhappy with the results, you can watch your translation quality rise dramatically for a marginal increase in price.

Happy Translator's Day

Happy Translator’s Day, my fellow Translators and Interpreters! On this day, I would like to recognize and commend fellow translators for the work we do and what it requires, and to address the layperson’s misconception that any fluent bilingual may as well serve as a qualified translator. That this is not so may be painfully obvious to us, but the confusion persists. Translation and interpretation are very specific skills which, just like any specialized capability, require certain cognitive and operational faculties. Some of these are: a quick aptitude for understanding complex and diverse subjects; an analytical mind; and extensive research ability, since one has to analyze complex information, deduce what additional information one may need, and identify where and how to find it.

Advanced Terminology Management in Lilt

Our new advanced termbase editor lets you manage terminology more effectively by keeping terms organized with meta information that you can customize. Import terminology with meta fields or add your own fields. Your terms will appear in both the Lexicon and the Editor suggestions and help you increase consistency and quality.

Keeping Your Data Secure in Lilt

In a world where data hacks and breaches seem to make front-page news more often than we’d like, a common question translators and businesses have about Lilt is: is my data safe? No need to worry. Lilt was built with that concern in mind. Read the answers below to some common questions about security in Lilt. Is my data shared with anyone? Your data is private to your Lilt account. It is never shared with other accounts or users. When you upload a translation memory or translate a document, those translations are associated only with your account. For Business customers, translation memories can be shared across your projects, but they are not shared with other users or third parties.

Cognitive Processes of Interpreting and Translation

Ever wonder what happens “under the hood” in the process of translation and interpretation? Let’s look at interpretation first. The cognitive processes that take place in a simultaneous interpreter’s mind and brain are intense and happen nearly all at the same time. Neurons are firing in all directions, igniting different cognitive processing circuitry. The brain is literally “on fire,” as one Russian cognitive scientist puts it. Consecutive interpreting differs from simultaneous interpreting, from the perspective of cognitive science, in that the stages of conversion of meaning and reproduction are delayed from the stages of intake and deciphering of the message. That does not, however, make the process easier.

What We’re Reading: Domain Attention with an Ensemble of Experts

A major problem in the effective deployment of machine learning systems in practice is domain adaptation: given a large auxiliary supervised dataset and a smaller dataset of interest, use the auxiliary dataset to increase performance on the smaller one. This paper considers the case where we have K datasets from distinct domains and want to adapt quickly to a new one. It learns K separate models, one on each of the K datasets, and treats each as an expert. Then, given a new domain, it creates another model for this domain but, in addition, computes attention over the experts. It computes attention via a dot product measuring the similarity of the new domain’s hidden representation with the other K domains’ representations.
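
The attention computation described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper’s actual architecture; the vectors and dimensions are invented purely for the example.

```python
import numpy as np

def domain_attention(new_repr, expert_reprs):
    # Dot-product similarity between the new domain's hidden
    # representation and each expert domain's representation.
    scores = np.array([expert.dot(new_repr) for expert in expert_reprs])
    # Softmax turns the similarities into attention weights that sum to 1.
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Toy example: three experts with 4-dimensional domain representations.
experts = [np.array([1.0, 0.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0, 0.0]),
           np.array([0.5, 0.5, 0.0, 0.0])]
new_domain = np.array([0.9, 0.1, 0.0, 0.0])
weights = domain_attention(new_domain, experts)
# The expert most similar to the new domain receives the largest weight.
```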

Making the Most of Your First Project in Lilt

Lilt was designed to maximize translation productivity. So you’ll want to get started using it quickly, rather than spending your time learning how to use it. The interface and user experience differ from conventional CAT tools. Change is hard. We know. But we’ve designed the system with the goal of making you productive in less than 10 minutes. The articles in our Knowledge Base will turn you into a power user, but here are the basics of what you need to know to get started…

What We’re Reading: Learning to Decode for Future Success

When doing beam search in sequence-to-sequence models, one explores next words in order of their likelihood. However, during decoding there may be other constraints we have or objectives we wish to maximize, such as sequence length, BLEU score, or mutual information between the target and source sentences. To accommodate these additional desiderata, the authors add a term Q to the likelihood capturing the appropriate criterion and then choose words based on this combined objective.
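
In sketch form, the combined objective simply adds a weighted Q term to each candidate’s log-likelihood. The words, probabilities and weight below are invented for illustration:

```python
import math

def combined_score(log_likelihood, q_value, weight):
    # Rank a candidate next word by its likelihood plus a weighted
    # future-success term Q (e.g. expected sequence length or BLEU).
    return log_likelihood + weight * q_value

# Candidate "a" is more likely right now, but "b" scores higher on the
# auxiliary objective Q, so the combined score can prefer "b".
candidates = {"a": (math.log(0.6), 0.2), "b": (math.log(0.3), 1.5)}
best = max(candidates, key=lambda w: combined_score(*candidates[w], 0.7))
```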

FREE THE TRANSLATORS! How Adaptive MT turns post-editing janitors into cultural consultants

Originally posted on LinkedIn by Greg Rosner. I saw the phrase “linguistic janitorial work” in this Deloitte whitepaper on “AI-augmented government, using cognitive technologies to redesign public sector work,” used to describe the drudgery of translation work that so many translators are required to do today through post-editing of machine translation. And then it hit me what’s really going on. The sad reality of the past several years is that many professional linguists, who have decades of particular industry experience, expertise in professional translation and degrees in writing, have seen their jobs reduced to sentence-by-sentence clean-up of translations that flood out of Google Translate or other Machine Translation (MT) systems.

The Augmented Translator: How Their Jobs are Changing and What They Think About It

Written by Kelly Messori. The idea that robots are taking over human jobs is by no means a new one. Over the last century, the automation of tasks has done everything from making a farmer’s job easier with tractors to replacing the need for cashiers with self-serve kiosks. More recently, as machines get smarter, the discussion has shifted to robots taking over more skilled positions, namely that of a translator. A simple search on the question-and-answer site Quora reveals dozens of inquiries on this very issue, and a recent survey shows that AI experts predict robots will take over the task of translating languages by 2024. Everyone wants to know if they’ll be replaced by a machine and, more importantly, when that will happen.

What We’re Reading: Neural Machine Translation with Reconstruction

Neural MT systems generate translations one word at a time. They can still generate fluent translations because they choose each word based on all of the words generated so far. Typically, these systems are just trained to generate the next word correctly, based on all previous words. One systematic problem with this word-by-word approach to training and translating is that the translations are often too short and omit important content. In the paper Neural Machine Translation with Reconstruction, the authors describe a clever new way to train and translate. During training, their system is encouraged not only to generate each next word correctly but also to correctly generate the original source sentence based on the translation that was generated. In this way, the model is rewarded for generating a translation that is sufficient to describe all of the content in the original source.
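
The training objective can be summarized as a translation loss plus a weighted reconstruction loss. The numbers below are made up purely to show how the combined loss penalizes a translation that drops source content:

```python
def training_loss(translation_nll, reconstruction_nll, lam=1.0):
    # Negative log-likelihood of the translation plus a weighted
    # negative log-likelihood of reconstructing the source from it.
    return translation_nll + lam * reconstruction_nll

# A complete translation: slightly worse likelihood, easy to reconstruct.
full = training_loss(translation_nll=2.1, reconstruction_nll=0.8)
# A too-short translation: better likelihood, but the source content
# cannot be recovered from it, so reconstruction is expensive.
short = training_loss(translation_nll=1.9, reconstruction_nll=2.5)
# The joint objective prefers the complete translation (full < short).
```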

What We’re Reading: Single-Queue Decoding for Neural Machine Translation

The most popular way of finding a translation for a source sentence with a neural sequence-to-sequence model is a simple beam search. The target sentence is predicted one word at a time and after each prediction, a fixed number of possibilities (typically between 4 and 10) is retained for further exploration. This strategy can be suboptimal as these local hard decisions do not take the remainder of the translation into account and cannot be reverted later on.
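
A minimal sketch of such a beam search, with an invented toy next-word model, makes the hard pruning visible: hypotheses that fall outside the beam are discarded for good, even if they would have led to a better complete translation.

```python
import math

def beam_search(next_probs, beam_size=2, max_len=3):
    # next_probs(seq) returns a {word: probability} distribution
    # for the next word given the partial translation seq.
    beams = [((), 0.0)]  # (partial translation, cumulative log-prob)
    for _ in range(max_len):
        expanded = []
        for seq, score in beams:
            for word, p in next_probs(seq).items():
                expanded.append((seq + (word,), score + math.log(p)))
        # Hard local decision: keep only the beam_size best hypotheses.
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_size]
    return beams

def toy_model(seq):
    # Invented distributions purely for illustration.
    return {"x": 0.7, "y": 0.3} if len(seq) % 2 == 0 else {"y": 0.6, "x": 0.4}

best_seq, best_score = beam_search(toy_model)[0]
```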

Technology for Interactive MT

This article describes the technology behind Lilt’s interactive translation suggestions. The details were first published in an academic conference paper, Models and Inference for Prefix-Constrained Machine Translation. Machine translation systems can translate whole sentences or documents, but they can also be used to finish translations that were started by a person — a form of autocomplete at the sentence level. In the computational linguistics literature, predicting the rest of a sentence is called prefix-constrained machine translation. The prefix of a sentence is the portion authored by a translator. A suffix is suggested by the machine to complete the translation. These suggestions are proposed interactively to translators after each word they type. Translators can accept all or part of the proposed suffix with a single keystroke, saving time by automating the most predictable parts of the translation process.
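
A greedy sketch of the prefix/suffix idea, with a hypothetical toy model standing in for the real MT system (the names and distributions below are invented):

```python
def complete_translation(model, prefix, max_len=6):
    # The prefix typed by the translator is fixed; the machine only
    # proposes the suffix, one most-likely word at a time.
    seq = list(prefix)
    while len(seq) < max_len:
        dist = model(tuple(seq))
        word = max(dist, key=dist.get)
        if word == "</s>":  # end-of-sentence token
            break
        seq.append(word)
    return seq[len(prefix):]  # the suggested suffix

def toy_model(seq):
    # Hypothetical next-word distributions for illustration only.
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.8, "</s>": 0.2},
        ("the", "cat", "sat"): {"</s>": 0.9, "down": 0.1},
    }
    return table.get(tuple(seq), {"</s>": 1.0})

suffix = complete_translation(toy_model, ("the",))
# Suggested suffix after the prefix "the": ["cat", "sat"]
```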

Case Study: SDL Trados

Abstract: We compare human translation performance in Lilt to SDL Trados, a widely used computer-aided translation tool. Lilt generates suggestions via an adaptive machine translation system, whereas SDL Trados relies primarily on translation memory. Five in-house English–French translators worked with each tool for an hour. Client data for two genres was translated. For user interface data, subjects in Lilt translated 21.9% faster. The top throughput in Lilt was 39.5% higher than the top rate in Trados. This subject also achieved the highest throughput in the experiment: 1,367 source words per hour. For a hotel chain data set, subjects in Lilt were 13.6% faster on average. Final translation quality is comparable in the two tools.

2017 Machine Translation Quality Evaluation Addendum

This post is an addendum to our original post on 1/10/2017 entitled 2017 Machine Translation Quality Evaluation.

Experimental Design

We evaluate all machine translation systems for English-French and English-German. We report case-insensitive BLEU-4 [2], which is computed by the mteval scoring script from the Stanford University open source toolkit Phrasal. NIST tokenization was applied to both the system outputs and the reference translations.
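
For readers who want a feel for the metric, here is a self-contained sketch of sentence-level, case-insensitive BLEU-4 from its standard definition. Note the post itself uses the mteval script from the Phrasal toolkit, not this code, and the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(hypothesis, reference):
    # Geometric mean of clipped 1- to 4-gram precisions, times a brevity
    # penalty; tokens are lowercased for case-insensitive scoring.
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    log_precisions = []
    for n in range(1, 5):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        if clipped == 0:
            return 0.0  # unsmoothed: any zero n-gram precision gives 0
        log_precisions.append(math.log(clipped / sum(hyp_ngrams.values())))
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(log_precisions) / 4)

score = bleu4("The quick brown fox jumps over the lazy dog .",
              "the quick brown fox jumped over the lazy dog .")
```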

Machine Translation Tools: Comprehensive BLEU Evaluation

The language services industry offers an intimidating array of machine translation options. To help you separate the truly innovative from the middle-dwellers, your pals here at Lilt set out to provide reproducible and unbiased evaluations of these options using public data sets and a rigorous methodology. This evaluation is intended to assess machine translation not only in terms of baseline translation quality, but also regarding the quality of domain adapted systems where available. Domain adaptation and neural networks are the two most exciting recent developments in commercially available machine translation. We evaluate the relative impact of both of these technologies for the following commercial systems: