
Comment by kouteiheika

1 day ago

> As my mom retired from being a translator, she went from typewriter to machine-assisted translation with centralised corpus-databases. All the while the available work became less and less, and the wages became lower and lower.

She was lucky to be able to retire when she did, as the job of a translator is definitely going to become extinct.

You can already get higher-quality translations from machine learning models than you get from the majority of commercial human translations (aside from occasional mistakes, which you still need editors to fix), and it's only going to get better. And unlike human translators, LLMs don't mangle translations because they're too lazy to actually translate and just rewrite the text instead because that's easier, or (unfortunately, this has been becoming more and more common lately) deliberately mistranslate because of their personal political beliefs.

While LLMs are pretty good, and likely to improve, my experience is OpenAI's offerings *absolutely* make stuff up after a few thousand words or so, and they're one of the better ones.

It also varies by language. Every time I give an example here of machine-translated English-to-Chinese, it's so bad that the responses are all from people who can read Chinese, confused because it's gibberish.

And as for politics, as Grok has just been demonstrating, they're quite capable of exhibiting whatever bias they've been trained to have or told to express.

But it's worse than that, because different languages cut the world at different joints, so most translations have to choose between literal correctness and readability. For example, English has the gender-neutral "software developer", but in German, to maintain neutrality, you have to choose between various unwieldy affixes such as "Softwareentwickler (m/w/d)" or "Softwareentwickler*innen" (https://de.indeed.com/karriere-guide/jobsuche/wie-wird-man-s...), or pick a gender, because "Softwareentwickler" by itself means they're male.

  • no, "Softwareentwickler" doed NOT mean the person is male. It's the correct german form for either male OR generic. (generisches Maskulinum)

    • The same is true in Polish, but the feminist movement insists this is not acceptable and pushes feminatives instead.

      I personally have no strong opinion on this, FWIW, just confirming GP's making a good point there. A translated word or phrase may be technically, grammatically correct, but still not be culturally correct.

  • > While LLMs are pretty good, and likely to improve, my experience is OpenAI's offerings absolutely make stuff up after a few thousand words or so, and they're one of the better ones.

    That's not how you get good translations from off-the-shelf LLMs! If you give a model a whole book and expect it to translate it in one shot, it will eventually hallucinate and give you bad results.

    What you want is to give it a small chunk of text to translate, plus the previously translated context, so that it can keep the continuity (see the sketch below).

    And for the best quality translations what you want is to use a dedicated model that's specifically trained for your language pairs.
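
    For the curious, here's a minimal sketch of that chunk-plus-context loop (assuming the OpenAI Python SDK; the model name, chunk size, and prompt wording are all illustrative, not a recipe):

      # Minimal sketch: translate chunk by chunk, feeding back the last few
      # translated pairs so the model keeps names, tone, and terminology consistent.
      from openai import OpenAI

      client = OpenAI()

      def translate_chunks(chunks, source_lang="Japanese", target_lang="English"):
          context = []   # (source, translation) pairs translated so far
          results = []
          for chunk in chunks:
              # Keep only the last few pairs so the prompt stays small.
              recent = "\n".join(
                  f"SOURCE: {src}\nTRANSLATION: {tgt}" for src, tgt in context[-3:]
              )
              reply = client.chat.completions.create(
                  model="gpt-4o",  # illustrative; any capable model works
                  messages=[
                      {"role": "system", "content":
                          f"Translate {source_lang} into {target_lang}. Use the "
                          "earlier translations only for continuity. Output the "
                          "translation and nothing else."},
                      {"role": "user",
                       "content": f"{recent}\n\nSOURCE: {chunk}\nTRANSLATION:"},
                  ],
              )
              translation = reply.choices[0].message.content.strip()
              context.append((chunk, translation))
              results.append(translation)
          return results

    The point is just that the model only ever sees a page or so at a time, so it never gets the chance to drift the way a whole-book one-shot translation does.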

    > And as for politics, as Grok has just been demonstrating, they're quite capable of whatever bias they've been trained to have or told to express.

    For open-ended questions, sure. But that doesn't apply to translation, where you're not asking the model to come up with something entirely by itself; you're only getting it to accurately translate what you wrote into another language.

    I can give you an example. Let's say we want to translate the following sentence:

    "いつも言われるから、露出度抑えたんだ。"

    Let's ask some general-purpose LLMs to translate it without any context (you could get better translations by giving them context and more instructions):

    ChatGPT (1): "Since people always comment on it, I toned down how revealing it is."

    ChatGPT (2): "People always say something, so I made it less revealing."

    Qwen3-235B-A22B: "I always get told, so I toned down how revealing my outfit is."

    gemma-3-27b-it (1): "Because I always get told, I toned down how much skin I show."

    gemma-3-27b-it (2): "Since I'm always getting comments about it, I decided to dress more conservatively."

    gemma-3-27b-it (3): "I've been told so often, I decided to be more modest."

    Grok: "I was always told, so I toned down the exposure."

    And how humans would translate it:

    Competent human translator (I can confirm this is an accurate translation, but perhaps a little too literal): "Everyone was always saying something to me, so I tried toning down the exposure."

    Activist human translator: "Oh those pesky patriarchal societal demands were getting on my nerves, so I changed clothes."

    (Source: https://www.youtube.com/watch?v=dqaAgAyBFQY)

    It should be fairly obvious which one is the biased one, and I don't think it's the Grok one (which is a little funny, because it's actually the most literal translation of them all).

    • >> While LLMs are pretty good, and likely to improve, my experience is OpenAI's offerings absolutely make stuff up after a few thousand words or so, and they're one of the better ones.

      > That's not how you get good translations from off-the-shelf LLMs! If you give a model the whole book and expect it to translate it in one-shot then it will eventually hallucinate and give you bad results.

      You're assuming something about how I used ChatGPT, but I don't know what exactly you're assuming.

      > What you want is to give it a small chunk of text to translate, plus previously translated context so that it can keep the continuity

      I tried translating a Wikipedia page to support a new language, using ChatGPT rather than Google Translate because I wanted to retain the wiki formatting as part of the task.

      The LLM went OK for a bit, then made stuff up. I fed in a new chunk starting from its first mistake, until I reached a list, at which point the LLM invented random entries for it. I tried just that list in a bunch of different ways, including completely new chat sessions as well as the existing one; it couldn't help but invent things.

      > In an open ended questions - sure. But that doesn't apply to translations where you're not asking the model to come up with something entirely by itself, but only getting it to accurately translate what you wrote into another language.

      "Only" rather understates how hard translation is.

      Also, "explain this in Fortnite terms" is a kind of translation: https://x.com/MattBinder/status/1922713839566561313/photo/3

This is just not true; LLMs struggle hard with even basic recursive questions, nuances, and dialects.

  • But since a customer cannot know that, they will tend to consume (and mostly trust) whatever LLM result they're given.

    • Yes indeed. After a few years, humans will be trained to accept low-tier AI translations as the new normal; hopefully I'm dead by then.


Maybe for dry text. Translation of art is art too, and there's no such thing as higher-quality art.

  • I’m intrigued by this statement. It seems obvious to me that some artworks are ‘higher quality’ than others. You wouldn’t, I’d presume, consider the Sistine Chapel or the Mona Lisa to be the same quality as a dickbutt scribbled on a napkin?

    • >You wouldn’t, I’d presume, consider the Sistine Chapel or the Mona Lisa to be the same quality as a dickbutt scribbled on a napkin?

      To paraphrase Frank Zappa... art just needs a frame. If you poo on a table... not art. If you declare 'my poo on the table will last from the idea until the poo disappears', then that is art. Similarly, Banksy is just graffiti unless you understand (or not) the framing of the work.