Comment by jaccola
4 hours ago
But that’s not the Turing Test. The human who can be fooled in the Turing test was explicitly called the “interrogator”.
To pass the Turing test, the AI would have to be indistinguishable from a human to the person interrogating it in a back-and-forth conversation. Someone simply being fooled by a piece of generated content does not count (if it did, the test would have been passed decades ago).
No LLM/AI system today can pass the Turing test.
I've encountered people who seem to properly understand how the test works, yet still think that current LLMs pass it easily.
Most of them come across as people who would think ELIZA passed it, if they weren't told up front that they were testing ELIZA.