Comment by Simulacra

1 year ago

There's definitely human intervention in the model. Gemini is not true AI; it has too much human intervention in its results.

You're speaking as if LLMs were some naturally occurring phenomenon that people at Google have tampered with. There's obviously always human intervention, since AI systems are built by humans.

  • It's pretty clear to me what the commenter means even if they don't use the words you like/expect.

    The model is built by machines from a massive dataset. Humans at Google may not like the output of a particular model due to their particular sensibilities, so they "tune it" and "filter both input and output" to limit what others can do with the model to what fits Google's sensibilities.

    Google said as much in their recent announcement, which was filled with words like "responsibility" and "safety", alluding to a lot of censorship going on.

    • Censorship of what? You object to Google applying its own bias (toward avoiding offensive outcomes), but you're fine with the biases inherent in the dataset.

      There is nothing the slightest bit objective about anything that goes into an LLM.

      Any product from any corporation is going to be built with its own interests in mind. That you see this through a political lens ("censorship") only reveals your own bias.

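For what it's worth, the "tune it and filter input/output" step being debated above can be sketched in a few lines. This is a toy blocklist-style output filter, purely illustrative: the function name and terms are hypothetical, and real moderation pipelines (Google's included) are far more sophisticated and not public.

```python
# Hypothetical sketch of a post-hoc output filter layered on top of a
# model's raw completions. Real systems use classifiers, not blocklists.

BLOCKLIST = {"badword", "forbidden-topic"}  # hypothetical banned terms

def filter_output(text: str) -> str:
    """Return the model's text unchanged, or a refusal if it trips the filter."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[response withheld by safety filter]"
    return text
```

The point of the sketch is that the filter sits outside the model itself: the weights are untouched, but what reaches the user is constrained by a separate, human-authored layer.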

None of it is "true" AI, because none of it is intelligent. It's all just autocomplete or random pixel generation that's been told to "complete x to y words". I agree, though: Gemini (and even ChatGPT) are both rather weak compared to what they could be if the "guard"rails were not so disruptive to the output.
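To make the "autocomplete" framing concrete: here is a toy next-word predictor built from bigram counts. A real LLM uses a neural network over tokens rather than word counts, but the training objective is the same flavor of next-item prediction. Everything here is illustrative, not how Gemini or ChatGPT actually work.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts: dict, word: str) -> str:
    """Greedy autocomplete: return the most frequent follower of `word`."""
    if word not in counts:
        return ""
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(complete(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Whether stacking this kind of prediction at scale counts as "intelligence" is exactly the disagreement in this thread; the mechanism itself is not in dispute.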

What’s the definition of “true AI”? Surely all AI has human intervention in its results, since it was trained on things made by humans.