AI Talk Series, Episode 2: The Pros and Cons of AI

by Elly Lee
11 Minute Read

Welcome back to our AI Talk Series, where we’ll be sharing AI insights and predictions from Lilt’s co-founders and experts. If you didn’t have a chance to check out Episode 1, here is a link to the article. This week, we discuss the pros and cons of AI for businesses and the localization industry. 

A little background about our experts: Lilt’s founders, Spence Green and John DeNero, met at Google while working on the Google Translate program. As researchers at Stanford and Berkeley, they both have experience using natural language technology to make information accessible to everyone. They were amazed to learn that Google Translate wasn’t used for enterprise products and services inside the company, and they left to start their own company, Lilt, to address this need.

Lilt’s underlying AI technology is similar to that of ChatGPT and Google Translate, before we add our patented Contextual AI, Connector-first approach, and human-adapted feedback. We sat down with Spence and John to learn more about large language models and their thoughts on AI.

What are the core benefits and downsides of generative AI?

John: Let’s start with the definition of generative AI. Generative AI is when a computer creates something—often from scratch or from a prompt. So creating a translation is generative AI, but it can also create images, videos, and other media formats. It's hard because these things that get generated are complex in structure. They're not just making a single decision.

Spence: Some of the machine learning systems that people might be familiar with are for labeling images or sentences—where you're just categorizing things. Here, you're generating structured objects from scratch based on some input, and translation is obviously an example of that.

John: And humans are so good at generating language that we don't necessarily realize how many ways it can go wrong. When you have a long sentence, there's just a long list of ways in which there can be something wrong with it. And that's what makes generative AI hard, and why it took a while to get the breakthroughs that we see today. There are a thousand different factors you have to get right about every sentence: word choice, agreement, order, and whether you have the right level of specificity or ambiguity. And now we have systems that can actually handle all this stuff at once, which is kind of amazing.

The core benefit is that there's a whole lot more generation work that should be done in the world than people really can do. Translation is a prime example, where there's so much content that should be translated into so many languages at publication quality—so there's no degradation in the experience that people have when they read it in another language—but not all of it gets translated because there are only so many translators in the world to do the work. And the costs are such that it just doesn't happen.

That's really a shame. So I think the promise of generative AI is to get more done in the world and to make sure that all the important work that enables modern life happens by augmenting the people with the professional skills to do that work with AI. 

What are the promises and opportunities of AI that can affect the day-to-day operations of businesses?

Spence: For translation, we've been building these systems where you use AI to augment what people generate, and it helps solve the problem that when you're doing translation work, you get a new document to translate and you just have a blank page—and that's where you start. Versus now, you can start with the machine's best prediction of what that translation is. And then it refines its predictions as you work together. I think this generative AI technology will broaden to other types of work, whether it's writing marketing articles or creating training documentation.

So I think that there are a lot of opportunities for information creation that typically started with a blank cursor or you know, a blank canvas in Photoshop, where now you can start with a machine prediction that gets you going. That just makes you more efficient in the work that you do.

John: Yeah, exactly. I think that anybody in a business setting knows that there are important things that just get delayed over and over again because there just isn't the capacity to do everything that should be done. What happens with generative AI, which is interesting, is that sometimes you ask it to generate a document and it does it really well, and then sometimes it misses something important. Sometimes it's just about slightly the wrong topic or it doesn't capture the main idea that you wanted to convey.

So there is a role for people to figure out whether it did the right thing. And that could be very quick. In some cases, you read the first few sentences and see that one whole section is perfect. But then you go down and you say, “Oh, actually, I wanted something else for this other part.”

And you have to go in and rewrite it. That's just part of working with a generative AI system and having a plan for figuring out how to validate and revise its output. That's the kind of thing that you can solve with the right process. At Lilt, we have quite an extensive process for making sure that there are no issues with the actual final translations, even if there were in the original generative AI output.

That validation process is really critical, and it's true for other things too. You know, Spence, especially when we're talking about generating marketing content or training content, people do have to be part of the process of supervising the AI and correcting its work, just like you would with a junior employee.

Spence: Totally. It’s the same concept as an editor in the newsroom, or a senior partner revising and checking the work of a junior partner in a law firm. Same idea. Only now the junior partner is an AI system and not a human being.

What are some problems with GenAI that we should be wary of? 

John: The biggest risk I see is that you have to know how to use it. It's not like a junior employee, where you can give them broad directions and they'll figure it out; it's an AI system, not a person. And so it is very sensitive to the prompts and inputs you provide it. There is really quite a bit of care in engineering and design that goes into prompting the system so that you can get useful output that you can work with.

That's a new branch of applied artificial intelligence—figuring out how to take some big generative system and actually get it to produce the most useful results so that it can enable increased efficiency. So the idea that you can just ask it any question you want and it's always gonna do the right thing is the wrong mindset. Instead, you need to really take great care to prompt the system in order to generate the right results.
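
To make John's point about prompting concrete, here is a minimal sketch of how much the framing of a request can shape the output. It uses the OpenAI Python SDK purely as a stand-in for any large language model API; the model name, prompts, and terminology choices are illustrative assumptions, not part of Lilt's product.

    # Minimal sketch: the same sentence, requested two ways.
    # Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    source = "The update ships next quarter."

    # A bare prompt: the model has to guess register, audience, and terminology.
    bare = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": f"Translate to German: {source}"}],
    )

    # A careful prompt: domain, tone, and preferred terms are stated explicitly.
    careful = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You translate enterprise release notes from English to German. "
                    "Use formal register, keep product names in English, "
                    "and render 'ships' as 'wird veröffentlicht'."
                ),
            },
            {"role": "user", "content": source},
        ],
    )

    print(bare.choices[0].message.content)
    print(careful.choices[0].message.content)

Both calls hit the same model; the difference is entirely in how carefully the task is specified, which is the engineering and design work John is describing.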

Spence: Yeah. One of the things I've been thinking about is that there are some products now that will summarize email chains or a meeting—and those types of work products are used to inform decisions.

And I think there will increasingly be AI-generated text, memos, presentations, and summaries that inform decision-making. People have a well-known bias called algorithm aversion: they tend to hold machines to a higher standard than they do people.

So if a person makes a mistake, they understand that. But if a machine makes a mistake, people have a much higher standard. This has been one of the challenges with self-driving cars, for example. There are a lot of car accidents on the road all the time, and everybody knows that humans make driving mistakes and have accidents, but as soon as a machine has a wreck, it's front-page news.

I wonder, as these systems get into day-to-day decision-making in business—and mistakes will certainly be edge cases—whether those mistakes will be magnified in some way because a machine made the decision. I wonder how businesses will manage that.

John: This is a great point. I think people will be very critical of these systems if they cause problems—even if those problems might have existed with people and without the technology. And actually, I think that's great. Where we should end up with AI is making better decisions than we made without it.

Similarly, with self-driving cars, I think the aspiration should be that there are far fewer accidents on the road in the future than there are today. The same goes for translation. The outcome should be that with AI assistance, translation quality is more consistent and better—which is something that we observe at Lilt.

When you're coaching people to do their jobs better over time, each person can only write so many summaries or make so many decisions. So you really can't invest 50 years of training in somebody just for them to do a job for a year. But with AI, because it's so scalable, once you've built it, you can have it drive many cars or translate many sentences. It makes sense to put a tremendous amount of investment into the quality of what it generates.

So, that's why Lilt exists. We can invest in and concentrate our expertise on making translations work well. We have a big research team here to do that, which is far more than you would invest if you were just training one small group of translators.

Because once we build the system, we can use it for a lot of content. It's well justified. And I think the same story goes for summarization. An unbelievable number of researchers and papers have worked on figuring out how to summarize a document into a short description.

But that all makes sense because once that technology works, it can be used so broadly over and over again that it justifies the investment of effort. 

So, yeah, I think it's okay to hold these systems to a really high standard. I think that's what we should expect from them, but we're not there in every case now. People should just be aware that it requires some amount of expertise in order to get the system to do what they want it to do.

* * *



Thanks for chatting, Spence and John! As businesses continue to embrace the change and opportunities that lie ahead, it will become increasingly important for global teams and leaders to invest in AI technologies to remain competitive. Tune in for the next episode of our AI Series for a deeper exploration of AI, large language models, and their impact on the translation industry.
