Here’s how you should be thinking about machine translation in 2019

by Adrienne Lumb
4 Minute Read

Translation is a complex art form, and one we deeply respect. But its manual processes make it difficult to keep up with changing technology and evolving business practices. Modernizing seems formidable, and change breeds uncertainty, so it's understandable that localization teams stick with legacy vendors, a static stack, and the same workflows.

However, sticking to old processes while everyone around you evolves has pernicious effects. Localization teams are frequently left out of critical early-stage planning and development. Internationalizing software, legal documents, and engineering stacks becomes one of the final considerations instead of one of the first, and scrutiny over a single legal document can hold up a go-to-market plan by months.

Pioneering a new approach to translation

Work becomes easier and more efficient when human translation is augmented with machine translation (MT). The technology is meant to work with translators, not against them, unlocking substantial productivity gains. It isn't an either/or choice between human and machine. A human-in-the-loop approach to translation will become the norm for businesses in the coming years because it's simply the most effective model: human-only translation is laborious, while machine-only methods are fast but sacrifice quality. The hybrid model of machine and human is the best solution for quality, speed, and cost.

Four essential mentalities, old vs. new

In conversations with localization professionals, we come across common misconceptions about how MT works and how to use it. It's challenging to imagine new applications and workflows when the capabilities of a system are unclear. Here are some old mindsets we still hear in our conversations, along with new perspectives that will yield better results for you and your team.

Old: We're evaluating the best "engine" for content type X.

It's natural to want to look "under the hood" when comparing tools, but scrutinizing the minutiae of different MT systems is counterproductive. Google Translate, Amazon Translate, Microsoft Translator Hub: these systems all use variants of the same underlying approach. The differences between the variants are small, so evaluating them closely yields little.

New: We're developing a data segmentation strategy for MT training.

What distinguishes one MT system from another is how you've trained it on your data. An MT system trained on a specific domain is much more effective than a generalist approach, so segmenting your data is instrumental. Focus on applying an adaptive MT system across your domains and dividing your data among them, rather than working with one large translation memory (TM). Adaptation is also highly effective on language variants (such as Canadian French vs. European French), so those variants should be part of your data matrix.
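As a minimal sketch of what segmenting a TM by domain and language variant might look like (the `tm_records` data, domain labels, and locale codes here are purely illustrative, not any particular TMS's format), each (domain, locale) slice would then feed its own adaptive model:

```python
from collections import defaultdict

# Hypothetical TM records: (source, target, domain, locale)
tm_records = [
    ("Sign in", "Se connecter", "ui", "fr-FR"),
    ("Sign in", "Ouvrir une session", "ui", "fr-CA"),
    ("Limitation of liability", "Limitation de responsabilité", "legal", "fr-FR"),
]

def segment_tm(records):
    """Partition TM data by (domain, locale) so each adaptive
    MT model is trained only on its own slice of the data."""
    buckets = defaultdict(list)
    for source, target, domain, locale in records:
        buckets[(domain, locale)].append((source, target))
    return buckets

buckets = segment_tm(tm_records)
for key, pairs in sorted(buckets.items()):
    print(key, len(pairs))
```

Note that Canadian and European French land in separate buckets, so each variant gets its own adaptation data rather than being averaged into one generic French model.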

Old: We're evaluating MT on low-value content.

Businesses often try out MT systems on low-risk or deprioritized content as a sort of precautionary test-drive. This risk-averse approach means relying on a TM to do a majority of the work in making predictions, which is a poor utilization of MT technology.

New: We have a one- to two-year plan for MT to touch every word we translate.

Forward-thinking organizations realize that MT will soon be ubiquitous in the localization industry. Teams see impactful results, and realize the full potential of MT, by applying it to high-priority content from the start. Create a strategic plan to produce all localized content through neural MT, rather than limiting it to niche use cases that don't drive business results.

Old: We're asking our services vendor for a discount.

There are few opportunities for productivity gains in traditional translation processes, so companies use rate negotiations as their only leverage with language service providers (LSPs). These rate discounts ultimately fall on the translators, who must either work for less than their standard rate or risk losing their work with the vendor entirely.

New: We're creating a plan for monetizing translation productivity.

Businesses should view neural MT as a means of efficiency and productivity gains. Translation speed, accuracy, and time to market are all measurable. Instead of seeking rate discounts on outdated methods, energize your teams by looking for ways to make their work faster, better, and more efficient. You'll see quantifiable results rather than an arbitrary rate discount.
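Because those gains are measurable, they can be priced. A back-of-envelope model (all figures below are illustrative assumptions, not benchmarks) shows how a throughput improvement translates directly into cost:

```python
def annual_translation_cost(words_per_year, words_per_hour, hourly_rate):
    """Back-of-envelope cost model: translator hours needed for
    the annual volume, multiplied by an hourly rate."""
    hours = words_per_year / words_per_hour
    return hours * hourly_rate

# Hypothetical numbers: 1M words/year at $50/hour, with MT-assisted
# throughput rising from 500 to 800 words per hour.
baseline = annual_translation_cost(1_000_000, 500, 50.0)
with_mt = annual_translation_cost(1_000_000, 800, 50.0)
print(f"baseline: ${baseline:,.0f}, with MT: ${with_mt:,.0f}")
```

Under these assumed figures the same volume costs $100,000 without MT assistance and $62,500 with it, a saving no flat rate discount would approach.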

Old: We're adding an “engine” into our TMS stack.

MT systems have dynamic, advanced applications, but legacy enterprise localization stacks offer only thin connectors to them. This setup prevents you from fully utilizing an MT system, and the legacy approach has two main problems:

1) It's non-interactive. This approach treats MT as a backup TM: if no TM match is available, the MT system provides a suggestion as a starting point for the translator. Teams lose the productivity gains of interactive MT because they're trapped in post-editing workflows.

2) You're restricting data insights. Many TMSs are set up for data import only, or export data only periodically, say, when a project is complete. This limits the rate and degree of adaptation possible.
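The non-interactive pattern in point 1 can be sketched in a few lines (the `tm` structure, `fuzzy_threshold`, and `mt_translate` stand-in are hypothetical, not any real TMS API):

```python
def suggest_translation(segment, tm, mt_translate, fuzzy_threshold=0.75):
    """Legacy 'MT as backup TM' pattern: use a TM match when its
    fuzzy score clears the threshold, otherwise fall back to a
    one-shot MT suggestion the translator must post-edit.
    `tm` maps source segments to (target, score); `mt_translate`
    is a stand-in for any batch MT API call."""
    match = tm.get(segment)
    if match is not None and match[1] >= fuzzy_threshold:
        return match[0]           # TM does most of the work
    return mt_translate(segment)  # static suggestion, no adaptation

# Toy usage:
tm = {"Hello": ("Bonjour", 1.0), "Goodbye": ("Au revoir", 0.6)}
mt = lambda s: f"<MT: {s}>"
print(suggest_translation("Hello", tm, mt))    # TM match clears threshold
print(suggest_translation("Goodbye", tm, mt))  # low score, falls back to MT
```

Nothing the translator does ever flows back into `tm` or the MT system here, which is exactly why this setup forfeits both interactivity and adaptation.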

New: We're redesigning our localization stack around MT.

Prepare for MT to touch all content that flows through your pipeline, rather than treating it as an add-on to a TM. Engineer your stack around MT and you'll future-proof your translation process and take full advantage of the insights your data has to offer. Treat it like your core operating system, and it will perform like one.

Monetize your translation productivity

Let’s talk about how neural MT can augment your translation processes and provide faster, better ways to work.

Request a demo