Comment by SpicyLemonZest

14 hours ago

I try to avoid calling LLMs intelligent when it's unnecessary, but that runs into the fundamental problem that they are intelligent by any common-sense definition of the term. The only way to defend the thesis that they aren't is to retreat to esoteric post-2022 definitions of intelligence, crafted to account for this new phenomenon of a machine that can hold a medium-quality discussion on any topic under the sun but can't count reliably.

I don't have a WSJ subscription, but other coverage of this story (https://www.theguardian.com/technology/2026/mar/04/gemini-ch...) makes it clear that Gemini's intelligence was precisely the problem in this case; a less intelligent chatbot would not have been able to create the detailed, immersive narrative the victim got trapped in.

It's interesting how the Turing Test was widely accepted as a way to evaluate machine intelligence, then quietly abandoned almost the instant machines could pass it. I don't necessarily think that was wrong, but it's striking how rapidly views changed.

Dijkstra said, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Well, we have some very fish-like submarines these days. But the point still holds: rather than worry about whether these things qualify as "intelligent," look at their actual capabilities. That's what matters.