
Comment by Aachen

2 days ago

(not the downvoter)

I'm not sure we're on the same page. I mean LLMs, right? Not whatever Google Translate and DeepL use. The latter was better than gtrans when it launched; nowadays it's probably similar, idk. Both are clearly machine learning, but the products' quality predates LLMs. They're not LLMs, and they haven't noticeably improved since LLMs arrived. Asking an LLM produces better output (so long as the LLM doesn't get sidetracked by the text's contents), though presumably at orders of magnitude higher energy consumption per word, even if you ignore training.

I agree that Google Translate, now on par with DeepL's free product afaik (though I'm not a gtrans user, so I can't say for sure), is decent but not a full replacement for humans, and that LLMs aren't as good as human translators either (not just for attention reasons), but it's another big step forward, right?

I'm not sure what DeepL uses, but Google invented the Transformer architecture (the T in GPT) for machine translation, i.e. for Google Translate.

IIRC, the original difference between the translation Transformer and GPT was the attention mask (the encoder attends in both directions, while GPT's decoder-only stack looks only backwards), which is akin to how the Mandelbrot and Julia sets come from the same formula but the variables mean different things. So I'd argue they're basically still the same thing, and you can model what an LLM does as translating a prompt into a response (see the sketch below).
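To make the mask point concrete, here's a minimal numpy sketch (purely illustrative, not anyone's actual implementation): the only thing separating encoder-style bidirectional attention from GPT-style causal attention here is which positions the mask lets each token see.

    import numpy as np

    def attention(Q, K, V, mask):
        # Scaled dot-product attention; the mask decides who can see whom.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        scores = np.where(mask, scores, -1e9)  # hide masked-out positions
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over visible positions
        return weights @ V

    T, d = 4, 8  # toy sequence length and head dimension
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))

    # Encoder-style (translation Transformer): every token attends to every token.
    bidirectional = attention(Q, K, V, np.ones((T, T), dtype=bool))

    # GPT-style: causal mask, token i only attends to tokens 0..i.
    causal = attention(Q, K, V, np.tril(np.ones((T, T), dtype=bool)))

Same formula, different mask; that's the Mandelbrot/Julia analogy in code.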

  • I didn't know that! I had heard Google made transformers and (then-Open)AI used them in GPT, but that explains why Google wasn't first to market with an LLM product: the intended application was translation.