Comment by wat10000
14 hours ago
It's interesting how the Turing Test was widely accepted as a way to evaluate machine intelligence, and then quietly abandoned almost instantly once machines were able to pass it. I don't even necessarily think that was incorrect, but it's striking how rapidly views changed.
Dijkstra said, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Well, we have some very fishy submarines these days. But the point still holds. Rather than worry about whether these things qualify as "intelligent," look at their actual capabilities. That's what matters.
Basically the only rigorously proposed Turing test is the one defined in the Kurzweil-Kapor wager[0], which has never been attempted.
[0]: https://en.wikipedia.org/wiki/Turing_test#Kurzweil%E2%80%93K...
As far as I know, we haven't done any proper Turing Tests for LLMs. And if we did, they would surely fail them.
Dude, you're in a Turing test right now. Conservatively, 10% of comments on this site are LLM output. We're all conversing with robots.
Nope, you are!
"Proper" may be doing some work here, but such a test was run last year and GPT-4.5 and LLaMa-3.1-405B both passed. Oddly, GPT-4.5 was judged as human significantly more often than chance. https://arxiv.org/abs/2503.23674
We will never prove AI is intelligent.
We will only prove humans aren't.
And the machine came into existence all on its own, did it? Another absolutely stupid comment.
Do you people actually 'think' before posting, or have you handed that off to LLMs entirely?