Comment by jaccola

21 days ago

But that’s not the Turing test. The human to be fooled in the Turing test is explicitly called the “interrogator”.

To pass the Turing test, the AI would have to be indistinguishable from a human to the person interrogating it in a back-and-forth conversation. Someone merely being fooled by a piece of generated content does not count (if it did, the test would have been passed decades ago).

No LLM/AI system today can pass the Turing test.
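
To make the distinction concrete, here is a toy sketch of the interactive setup (Python; "ask_ai" and "ask_human" are made-up stand-ins for the two respondents, not any real API):

    import random

    def run_imitation_game(ask_ai, ask_human, rounds=5):
        # Hide the two respondents behind anonymous channels A and B.
        channels = {"A": ask_ai, "B": ask_human}
        if random.random() < 0.5:
            channels = {"A": ask_human, "B": ask_ai}
        # The interrogator drives a live back-and-forth exchange,
        # not a one-shot judgement of a piece of generated content.
        for _ in range(rounds):
            for label in ("A", "B"):
                question = input(f"Your question for {label}: ")
                print(f"{label} replies: {channels[label](question)}")
        guess = input("Which one is the machine, A or B? ").strip().upper()
        machine = "A" if channels["A"] is ask_ai else "B"
        # The machine passes only if the interrogator cannot pick it out.
        return guess != machine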

I've encountered people who seem to properly understand how the test works and still think that current LLMs pass it easily.

Most of them come across to me as people who would think ELIZA passes it, if they weren't told up front that they were testing ELIZA.

  • I think state-of-the-art LLMs would pass the Turing test for 95% of people if those people could have (text) chatted with them before LLM chatbots became widespread.

    That is, the main thing that makes it possible to tell LLM bots apart from humans is that many of us have, over the past three years, become highly attuned to the specific foibles and text patterns that signal LLM-generated text - much like how I can tell my close friends' writing apart by their vocabulary, punctuation, typical conversation topics, and evidence (or lack thereof) of knowledge in certain domains.