Comment by wat10000

6 days ago

LLMs are really good with words and kind of crap at “thinking.” Humans are wired to see these two things as tightly connected. A machine that thinks poorly and talks great is inherently confusing. A lot of the discussion and disputes around LLMs come down to this.

It wasn’t that long ago that the Turing Test was seen as the gold standard for whether a machine was actually intelligent. LLMs blew past that benchmark a year or two ago and people barely noticed. This might be moving the goalposts, but I see it as a realization that thought and language are less tightly connected than we assumed.

So yeah, the fact that they do even this well is pretty amazing, but they sound like they should be doing so much better.

> LLMs are really good with words and kind of crap at “thinking.” Humans are wired to see these two things as tightly connected. A machine that thinks poorly and talks great is inherently confusing. A lot of the discussion and disputes around LLMs come down to this.

It's not an unfamiliar phenomenon in humans, either. Look at Malcolm Gladwell.