
Comment by roboboffin

1 year ago

I'm not sure that's true at all. There are several well-known researchers who say LLMs are in fact not doing reasoning.

Those are all people who have not yet decoupled "reasoning" from "consciousness" in their own thinking. It's admittedly hyperbolic to say "everyone". I love hyperbole on HN. :)

  • For example, papers like this one call into question whether an LLM can plan:

    https://arxiv.org/html/2409.13373v1

    Planning out the steps needed to execute something is a basic form of reasoning.

    • Planning, by definition, takes multiple reasoning steps. A single LLM inference is just one fundamental reasoning step, but it is a reasoning step nonetheless.

      It's like I'm saying a house is made of bricks. You can build a house of any shape out of bricks, but only once bricks have been invented can you build houses at all. The LLM "reasoning" that existed as early as GPT-3.5 was the "brick" out of which highly intelligent agents can be built, with no further "breakthroughs" required.

      The basic Transformer architecture was enough; it already has the magical ingredient of reasoning. The rest is just a matter of prompt engineering, roughly like the sketch below.
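
      A minimal sketch of that "just prompt engineering" claim, assuming a placeholder llm() helper (not any real library's API) stands in for a single inference step:

          def llm(prompt: str) -> str:
              """Placeholder for one LLM inference (one 'brick')."""
              raise NotImplementedError("wire this up to whatever model/API you use")

          def plan_and_execute(goal: str) -> list[str]:
              # One inference step produces a plan, one step per line.
              plan = llm(f"List the steps, one per line, needed to: {goal}")
              steps = [line.strip() for line in plan.splitlines() if line.strip()]

              # One further inference step per planned step, each seeing prior results.
              results: list[str] = []
              for step in steps:
                  done = "\n".join(results)
                  results.append(llm(f"Goal: {goal}\nDone so far:\n{done}\nNow do: {step}"))
              return results

      Each llm() call is a single reasoning step; the loop around it is the house built from those bricks.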
