Comment by jchw
2 days ago
Definitely context-dependent: things like news articles are probably the gold standard for machine translation, but creative works, and particularly dense reading material like novels, seem like they will remain a tougher nut to crack even for LLMs. It's not hopeless, but it's definitely way too early for anyone to be firing all of their translators.
> It's capable of just dropping out entire paragraphs.
I suspect, though, that issues like this can be fixed by improving how we interface with the LLM for the purposes of translation. (Closed-loop systems that use full LLMs under the hood but output a translation directly, as if they were plain translation models, have probably already solved this kind of problem by structuring the prompt carefully and possibly incrementally.)
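The incremental structuring I have in mind might look something like this minimal sketch, where the wrapper feeds the model one paragraph at a time and checks that nothing was dropped. (`translate_paragraph` is a hypothetical stand-in for a real LLM call, not any actual API; here it just tags the text so the example runs.)

```python
# Sketch of a closed-loop, paragraph-by-paragraph translation wrapper.

def translate_paragraph(text: str, target_lang: str = "en") -> str:
    # Hypothetical stand-in: a real system would prompt the LLM with one
    # paragraph at a time, plus surrounding context for coherence.
    return f"[{target_lang}] {text}"

def translate_document(source: str) -> str:
    paragraphs = [p for p in source.split("\n\n") if p.strip()]
    translated = [translate_paragraph(p) for p in paragraphs]
    # Closed-loop check: one output paragraph per input paragraph,
    # so an entire paragraph can never be silently dropped.
    assert len(translated) == len(paragraphs)
    return "\n\n".join(translated)
```

Because each paragraph is a separate unit of work, the "dropped paragraph" failure mode becomes a detectable error rather than a silent omission.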