Comment by chrisjj

23 days ago

> seem to struggle a lot with critical thinking.

It is an illusion arising from anthropomorphisation. They aren't thinking at all. They are just parroting the output of thinking that is long gone.

This feels too strong IMO.

Just focusing on the outputs we can observe, LLMs clearly seem to be able to "think" correctly on some small problems that feel generalized from examples they've been trained on (as opposed to pure regurgitation).

Objecting to this on some kind of philosophical grounds of "being able to generalize from existing patterns isn't the same as thinking" feels like a distinction without a difference. If LLMs were better at solving complex problems I would absolutely describe what they're doing as "thinking". They just aren't, in practice.

  • > Just focusing on the outputs we can observe, LLMs clearly seem to be able to "think" correctly on some small problems that feel generalized from examples they've been trained on (as opposed to pure regurgitation).

    "Seem". "Feel". That's the anthropomorphisation at work again.

    These chatbots are called Large Language Models for a reason. Language is mere text, not thought.

    If their sellers could get away with calling them Large Thought Models, they would. They can't, because these chatbots do not think.

    • > "Seem". "Feel". That's the anthropomorphisation at work again.

      Those are descriptions of my thoughts. So no, not anthropomorphisation, unless you think I'm a bot.

      > These chatbots are called Large Language Models for a reason. Language is mere text, not thought. If their sellers could get away with calling them Large Thought Models, they would. They can't, because these chatbots do not think.

      They use the term "thinking" all the time.

      ----

      I'm more than willing to listen to an argument that what LLMs are doing should not be considered thought, but "it doesn't have 'thought' in the name" ain't it.
