Comment by zahlman

19 days ago

I've encountered people who seem to properly understand how the test works, and still think that current LLMs pass it easily.

Most of them come across to me as people who would think ELIZA passes it, if they weren't told up front that they were testing ELIZA.

I think state-of-the-art LLMs would pass the Turing test for 95% of people if those people could (text) chat with them in a time before LLM chatbots became widespread.

That is, the main thing that makes it possible to tell LLM bots apart from humans is that many of us have, over the past 3 years, become highly attuned to specific foibles and text patterns that signal LLM-generated text. It's much like how I can tell my close friends' writing apart by their vocabulary, punctuation, typical conversation topics, and evidence (or lack thereof) of knowledge in certain domains.