Lilt Labs

Learn and explore everything you need to know about global experience

Announcing Lilt's Series C Financing

4 Minute Read

In ordinary times, a company milestone announcement would begin with a proclamation of progress and achievement. But these are extraordinary times, and as a society we face an unprecedented array of challenges: the lasting effects of a global pandemic, a ground war in Europe, rising inflation, labor and energy shortages, and fractious politics in most of the world. Easing this societal and economic turmoil is most important and should be our focus. Lilt has a role to play in addressing some of these challenges.

Read More

Statement on Ukraine

1 Minute Read

We at Lilt are saddened to witness the events that have unfolded in Ukraine over the past weeks, and we unequivocally condemn Russia’s acts of aggression. To our Ukrainian colleagues, our Ukrainian translator community, and all those with ties to the country, our support for you is unconditional.

Read More

Introducing Ascend 2020, Lilt's Annual Localization Conference

3 Minute Read

Read More

Announcing Lilt's Series B Financing

6 Minute Read

The languages a person learns as a child influence nearly every aspect of their life: their community, their access to information, and even their career prospects. I observed this most acutely while living in the Middle East, where I met bright and ambitious people who were often cut off from intellectual work because they didn't speak English. While the capacity for language is shared by all of humanity and is one of the most fascinating aspects of human intelligence, language differences can divide us socially and economically.

Read More

Scaling Localization With Artificial Intelligence and Automation

3 Minute Read

A few weeks ago, our friends over at GALA co-hosted a webinar with our CEO, Spence Green, called "Scaling Localization With Artificial Intelligence and Automation."

Read More

Lilt Adds 40th Language!

3 Minute Read

Lilt is happy to announce the addition of Bulgarian and Slovenian to our platform today. With this, Lilt officially supports 40 languages, or 50 when counting variants such as Castilian and Latin American Spanish.

Read More

Announcing Lilt's Series A Financing

8 Minute Read

Today I’m pleased to announce that we raised $9.5M in new funding led by Sequoia Capital. Bill Coughran, partner at Sequoia, will join our board. Our existing investors (Redpoint Ventures, Zetta Venture Partners, and XSeed Capital) all participated in the round. A Series A marks two milestones in an enterprise company’s life: strong revenue and momentum, and a compelling plan for the deployment of new capital. It also marks the start of a new partnership, in this case among us, Bill, and Sequoia. We are thrilled because Bill is that rare leader who has contributed to science, managed large technical teams, and led businesses. Sequoia has also funded the businesses we admire most, among them Google, Apple, and Stripe. In this post, I’ll describe what we’ve achieved, what we plan to do, and why we are certain that Bill is the right partner to help us do it.

Read More

What We’re Reading: Domain Attention with an Ensemble of Experts

1 Minute Read

A major problem in the effective deployment of machine learning systems is domain adaptation: given a large auxiliary supervised dataset and a smaller dataset of interest, use the auxiliary data to improve performance on the smaller dataset. This paper considers the case where we have K datasets from distinct domains and want to adapt quickly to a new one. It trains K separate models, one per dataset, and treats each as an expert. Given a new domain, it creates another model for that domain and, in addition, computes attention over the experts, using a dot product to measure the similarity of the new domain’s hidden representation to each of the K domains’ representations.
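
As a rough illustration of that attention step, here is a minimal sketch in Python. It assumes each domain is summarized by a dense vector and each expert emits an output distribution; the function names and the softmax-weighted mixture are our own illustration of the idea, not code from the paper.

```python
import numpy as np

def softmax(scores):
    exp = np.exp(scores - scores.max())  # numerically stabilized softmax
    return exp / exp.sum()

def mix_experts(h_new, expert_reps, expert_outputs):
    """Weight K expert predictions by dot-product similarity between
    the new domain's hidden representation and each expert's.

    h_new:          (d,)   hidden representation for the new domain
    expert_reps:    (K, d) one representation per expert domain
    expert_outputs: (K, V) one output distribution per expert
    """
    scores = expert_reps @ h_new      # (K,) dot-product similarities
    weights = softmax(scores)         # attention over the K experts
    return weights @ expert_outputs   # (V,) attention-weighted mixture
```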

Read More

What We’re Reading: Learning to Decode for Future Success

1 Minute Read

When doing beam search in sequence-to-sequence models, one explores next words in order of their likelihood. During decoding, however, there may be other constraints to satisfy or objectives to maximize: sequence length, BLEU score, or the mutual information between the target and source sentences, for example. To accommodate these additional desiderata, the authors add a term Q to the likelihood that captures the appropriate criterion, and then choose words based on this combined objective.
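
A minimal sketch of one beam-search step under such a combined objective, assuming a hypothetical next-word scorer `next_word_logprobs` and auxiliary criterion `q_fn`; the interpolation weight `alpha` is our own stand-in for however the paper balances the two terms.

```python
def beam_step(beams, next_word_logprobs, q_fn, beam_size=4, alpha=0.5):
    """One step of beam search ranked by log-likelihood plus a term Q.

    beams:              list of (hypothesis, cumulative_log_prob) pairs
    next_word_logprobs: hypothesis -> iterable of (word, log_prob) pairs
    q_fn:               hypothesis -> auxiliary objective value
                        (e.g., a length or mutual-information bonus)
    alpha:              trade-off between likelihood and Q (assumed)
    """
    candidates = []
    for hyp, log_prob in beams:
        for word, lp in next_word_logprobs(hyp):
            new_hyp = hyp + [word]
            new_log_prob = log_prob + lp
            # rank by the combined objective, not likelihood alone
            score = new_log_prob + alpha * q_fn(new_hyp)
            candidates.append((new_hyp, new_log_prob, score))
    candidates.sort(key=lambda c: c[2], reverse=True)
    # carry (hypothesis, cumulative log-prob) forward to the next step
    return [(h, lp) for h, lp, _ in candidates[:beam_size]]
```

The only change from standard beam search is the ranking line: candidates are ordered by log-likelihood plus Q rather than by likelihood alone.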

Read More

What We’re Reading: Neural Machine Translation with Reconstruction

1 Minute Read

Neural MT systems generate translations one word at a time. They can still produce fluent translations because they choose each word based on all of the words generated so far. Typically, these systems are trained only to generate the next word correctly, given all previous words. One systematic problem with this word-by-word approach to training and translating is that translations are often too short and omit important content. In the paper Neural Machine Translation with Reconstruction, the authors describe a clever new way to train and translate. During training, their system is encouraged not only to generate each next word correctly but also to reconstruct the original source sentence from the translation it generated. In this way, the model is rewarded for producing a translation that is sufficient to describe all of the content in the original source.
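
In training terms, the idea amounts to a joint objective: the usual translation loss plus a reconstruction loss that asks a second decoder to recover the source from the translation. A minimal sketch in Python, with `lam` standing in for whatever weight the authors place on the reconstruction term:

```python
def joint_loss(translation_nll, reconstruction_nll, lam=1.0):
    """Combined training objective for translation with reconstruction.

    translation_nll:    -log p(target | source), the usual NMT loss
    reconstruction_nll: -log p(source | generated translation), from a
                        reconstructor that decodes the source back out
    lam:                weight on the reconstruction term (assumed)
    """
    return translation_nll + lam * reconstruction_nll
```

Because the reconstruction term penalizes translations from which the source cannot be recovered, it pushes the model away from the short, content-dropping outputs described above.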

Read More