Comment by ApolloFortyNine

5 days ago

I've seen this happen as well with o3-mini, but I'm honestly not sure what triggered it. I use it all the time but have only had it switch to Chinese during reasoning maybe twice.

I've seen Grok sprinkle random Chinese characters into responses I asked for in ancient Greek and Latin.

  • I get strange languages sprinkled through my Gemini responses, including some very obscure ones. It just randomly changes language for one or two words.

    • Is it possible the "vector" is more accurate in another language? Like esprit de l'escalier or schadenfreude, or any number of other things that are a single word or phrase in one language but paragraphs or more in others?

Isn't it just the model getting increasingly incoherent as the non-English fraction of its training data increases?

Last I checked, no open-weight LLM has a language other than English as the sole dominant language in its training dataset.