Comment by mytailorisrich

10 days ago

I think this shows that LLMs do NOT 'understand' anything.

> I think this shows that LLMs do NOT 'understand' anything.

It shows these LLMs don't understand what's necessary for washing your car. But I don't see how that generalizes to "LLMs do NOT 'understand' anything".

What's your reasoning, there? Why does this show that LLMs don't understand anything at all?

I think this rather shows that GPT 5.2 Instant, which is the version he most probably used as a free user, is shit and unusable for anything.

  • Another/newer/less restricted LLM may give a better answer, but I still don't think we could conclude that it 'understands' anything.

    • If it answers this out-of-distribution question correctly -- which the other major models do -- what else should we conclude, other than that a meaningful form of "understanding" is being exhibited?

      Do we need a new dictionary word that acts as a synonym for "understanding" specifically for non-human actors? I don't see why, personally, but I guess a case could be made.
