Comment by ben_w
10 hours ago
>> While LLMs are pretty good, and likely to improve, my experience is OpenAI's offerings absolutely make stuff up after a few thousand words or so, and they're one of the better ones.
> That's not how you get good translations from off-the-shelf LLMs! If you give a model the whole book and expect it to translate it in one-shot then it will eventually hallucinate and give you bad results.
You're assuming something about how I used ChatGPT, but I don't know what exactly you're assuming.
> What you want is to give it a small chunk of text to translate, plus previously translated context so that it can keep the continuity
I tried translating a Wikipedia page to support a new language, using ChatGPT rather than Google Translate because I wanted to retain the wiki formatting as part of the task.
The LLM went OK for a bit, then made stuff up. I fed in a new chunk starting from its first mistake, until I reached a list, at which point the LLM invented random entries in that list. I tried just that list a bunch of different ways, in completely new chat sessions as well as the existing one, and it couldn't help but invent things.
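(For concreteness, the chunk-plus-context workflow being suggested looks roughly like the sketch below. It assumes the current OpenAI Python client; the model name and prompt wording are illustrative placeholders. My process was essentially this loop, done by hand.)

    from openai import OpenAI

    client = OpenAI()

    def translate_chunked(chunks, target_lang):
        # chunks: list of wiki-markup sections, split however you like
        translated = []
        for chunk in chunks:
            # Feed back the last few translated chunks as context so
            # terminology and formatting stay consistent across chunks.
            context = "\n".join(translated[-3:])
            resp = client.chat.completions.create(
                model="gpt-4o",  # illustrative model name
                messages=[
                    {"role": "system",
                     "content": f"Translate wiki markup into {target_lang}. "
                                "Preserve all formatting. Do not add or drop content."},
                    {"role": "user",
                     "content": f"Previously translated context:\n{context}\n\n"
                                f"Translate this chunk:\n{chunk}"},
                ],
            )
            translated.append(resp.choices[0].message.content)
        return "\n".join(translated)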
> In open-ended questions - sure. But that doesn't apply to translations, where you're not asking the model to come up with something entirely by itself, but only getting it to accurately translate what you wrote into another language.
"Only" rather understates how hard translation is.
Also, "explain this in Fortnite terms" is a kind of translation: https://x.com/MattBinder/status/1922713839566561313/photo/3