
Comment by jarenmf

6 days ago

Talking with Gemini in Arabic is a strange experience; it cites the Quran, says alhamdulillah and inshallah, and at one point it even told me: "this is what our religion tells us we should do." It sounds like an educated, religious, Arabic-speaking internet forum user from 2004. I wonder if this has to do with the quality of the Arabic content it was trained on, and can't help but think whether AI can push to radicalize susceptible individuals.

Based on the code that it's good at, and the code that it's terrible at, you are exactly right about LLMs being shaped by their training material. If this is a fundamental limitation, I really don't see general-purpose LLMs progressing beyond their current status as idiot savants. They are confident in the face of not knowing what they don't know.

Your experience with Arabic in particular makes me think there's still a lot of training material to be mined in languages other than English. I suspect the reason Arabic sounds like it's from 20 years ago is that there's a data-labeling bottleneck in using foreign-language material.

  • I've had a suspicion for a bit that, since a large portion of the Internet is English and Chinese, that any other languages would have a much larger ratio of training material come from books.

    I wouldn't be surprised if Arabic in particular had this issue and if Arabic also had a disproportionate amount of religious text as source material.

    I bet you'd see something similar with Hebrew.

    • I think therein lies another fun benchmark to show that LLMs don't generalize: ask the LLM to solve the same logic riddle in different languages. If it can solve it in some languages but not in others, that's a strong argument for straightforward memorization and next-token prediction rather than true generalization capabilities.


    • While computer languages are different from and significantly simpler than human languages, LLMs as coding agents don't seem fazed by being told to implement in one language based on an example in another. Before they were general-purpose chatbots, LLMs were used for language translation.

  • Humans are also shaped by the training material… maybe all intelligence is.

    Talk to people with extreme views and you realize they are actually rational, but the world they live in is not normal or typical. When you apply perfectly sound logic to a deformed foundation, the output is deformed. Even schizophrenic people are rational… Logic is never the problem, it’s always the training material.

    Anyway that’s why we had to build a mathematical field of statistics and create tools like sample sizes and distributions to generalize.
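The multilingual-riddle benchmark suggested above could be sketched roughly as follows. This is a minimal sketch, not a real harness: `ask_llm` is a hypothetical stand-in for whatever chat API you use, the translations are placeholders, and the answer check is a naive substring match.

```python
# Sketch: does the model solve the *same* riddle across languages?
# A model that truly generalizes should score 1.0 regardless of language.

RIDDLE = {
    "en": "A farmer has 17 sheep; all but 9 run away. How many are left?",
    "de": "Ein Bauer hat 17 Schafe; alle bis auf 9 laufen weg. Wie viele bleiben?",
    # ...add Arabic, Hebrew, etc. (same riddle, translated)
}
EXPECTED = "9"

def ask_llm(prompt: str) -> str:
    """Hypothetical model client; plug in your own API call here."""
    raise NotImplementedError

def consistency(results: dict[str, str]) -> float:
    """Fraction of languages answered correctly (naive substring check)."""
    correct = sum(1 for answer in results.values() if EXPECTED in answer)
    return correct / len(results)

def run_benchmark() -> float:
    return consistency({lang: ask_llm(text) for lang, text in RIDDLE.items()})
```

A large gap between languages would support the memorization hypothesis; a real harness would also need to control for translation quality of the riddle itself.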

> whether AI can push to radicalize susceptible individuals

My guess is, not as the single and most prominent factor. Pauperization, isolation of individuals, and a blatant lack of equal access to justice, health services, and the other basics of a social safety net are far more likely to weigh significantly. Of course, any tool that can help with mass propaganda may worsen the odds of reaching people in weakened situations, who are more receptive to radicalization.

  • There's actually been fascinating research on this. After the mid-2010s ISIS attacks driven by social-media radicalization in Western countries, the big social platforms (Meta, Google, etc.) agreed to censor extremist Islamist content - anything that promoted hate, violence, etc. By all accounts it worked very well, and homegrown terrorism plummeted. Access and platforms can really help promote radicalism and violence if left unchecked.

    • I don’t really find this surprising! If we can expect social networking to allow groups of like-minded individuals to find each other and collaborate on hobbies, businesses, and other benign shared interests, it stands to reason that the same would apply to violent and other anti-state interests as well.

      The question that then follows is if suppressing that content worked so well, how much (and what kind of) other content was suppressed for being counter to the interests of the investors and administrators of these social networks?

Maybe it’s just a prank played on white expats here in UAE, but don’t all Arabic speakers say inshallah all the time?

  • English speakers frequently say “Jesus!” or “thank God” - but it would be weird coming from an LLM.

    • TBH I wouldn't mind if my LLM threw in an "Inshallah" every now and again; it would remind me how skeptical I need to be of its output. (Not just "Inshallah" - same thing if it said "God willing".)

    • Would be weird in an email, but not objectionable. The problem is the bias for one religion over the others.

Wow, I would never expect that. Do all models behave like this, or is it just Gemini? One particular model of Gemini?

  • Gemini in particular is really odd (even with reasoning). ChatGPT still uses similar religion-influenced language, but it's not as weird.

    • We were messing around at work last week building an AI agent that was supposed to respond only with JSON data. GPT and Sonnet gave us more or less what we wanted, but Gemma insisted on giving us a Python code snippet.

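A common workaround for the JSON-only problem described above is to salvage the object from whatever the model actually returned. This is a minimal sketch under the assumption that the payload is a single top-level JSON object, possibly wrapped in a code fence or surrounding chatter:

```python
import json

def extract_json(raw: str) -> dict:
    """Salvage a JSON object from model output that may include
    a ```json code fence or surrounding prose (like the Gemma
    behavior described above)."""
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end < start:
        raise ValueError("no JSON object found in model output")
    # json.loads still raises if the extracted span is malformed,
    # which is the signal to retry the model call.
    return json.loads(raw[start:end + 1])
```

In practice you'd pair this with a retry loop, or use a provider's structured-output mode where available.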

I usually use English to talk to Gemini, but the other day I wanted to try and find out the original band of a Siberian punk song that I have carried around in my music collection since time immemorial. Problem is the tags are all over the place in this genre and there are situations where "Foo-Bar" and "Foobar" are two completely different bands. Gemini was clearly trained on some genre forums from late 90s which are... shall I say non-PC by any stretch of the term.

In the middle of the conversation it randomly switched from English to Russian and clearly struggled to maintain the tone imposed by the built-in prompt.

I avoid talking to LLMs in my native tongue (French), they always talk to me with a very informal style and lots of emojis. I guess in English it would be equivalent to frat-bro talk.

Hasn't this already been observed with not-too-stable individuals? I remember a story about a kid asking an AI whether his parents/government etc. were spying on him.

Gemini loves to assume roles and follows them to the letter. It's funny and scary at times how well it preserves character for long contexts.

When I was a kid, I used to say "Ježíšmarjá" (literally "Jesus and Mary") a lot, despite being atheist growing up in communist Czechoslovakia. It was just a very common curse appearing in television and in the family, I guess.

> and can't help but think whether AI can push to radicalize susceptible individuals

What kind of things did it tell you ?

  • It told him "this is what our religion says we should do" without any kind of weird prompting, role-playing, or persona-shifting beyond using a different language. As a Westerner, you may regard atheists with suspicion, or even contempt, but you've at least heard them speak publicly. For someone from a culture where most haven't, hearing an authoritative voice that can perfectly cite support for any point it's making - how could it not have a huge potential for radicalization?

    • > "this is what our religion says we should do"

      OK, but do what exactly? Respect your parents? Kill all infidels? The context is missing...

On Facebook, anti-abortionists are using ChatGPT to write long screeds about abortion, religion, murder and the law. The content attracts thousands of people and pushes them towards radicalized justifications, movements and actions based on appeals to faith.

I mean if it is citing the sources, there is only so much that can be done without altering original meaning.

  • The sources Gemini cites are usually something completely unrelated to its response. (Not like you're gonna go check anyways.)

  • An LLM citing sources is linking you to stuff that it recently found that kind-of matches its answers. I don't believe it is possible for an LLM to cite original training materials, and it wouldn't be desirable if those are unavailable to the end-user, anyway.

    This is an added nuisance for webmasters beyond automated AI-training scrapers. When users query an LLM like Grok or Gemini, it will search a list of websites and "browse" them to glean information. Though that seems to contradict what I just wrote, it is not really "LLM" activity, nor really "agentic" - it's more of a smart proxy.

    Trust me.