
Comment by roboboffin

1 year ago

For example, papers like this call into question whether an LLM can plan:

https://arxiv.org/html/2409.13373v1

Planning out the steps needed to execute something is a basic form of reasoning.

Planning, by definition, takes multiple reasoning steps. A single LLM inference amounts to only one fundamental reasoning step, but it is a reasoning step nonetheless.

It's like saying a house is made of bricks: once bricks have been invented, you can build a house of any shape out of them. The LLM "reasoning" that existed even as early as GPT-3.5 was the "brick" from which highly intelligent agents can be built, with no further "breakthroughs" required.

The basic Transformer architecture was enough; it already contains the magical ingredient of reasoning. The rest is just a matter of prompt engineering, as in the sketch below.
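As a minimal sketch of that claim (hypothetical, not taken from the linked paper): chaining single inferences in a plain loop, where each call sees the goal plus the steps produced so far, turns one-step "bricks" into a multi-step plan. The `call_llm` function here is a stand-in for any chat-completion API and is stubbed with canned replies so the loop itself runs.

    # Sketch: multi-step planning assembled from single LLM inferences.
    # `call_llm` is hypothetical; swap in a real model client to use it.

    def call_llm(prompt: str) -> str:
        # Stubbed model call: return one canned step per prior step in the prompt.
        canned = ["1. Boil water", "2. Add pasta", "3. Drain and serve", "DONE"]
        prior_steps = prompt.count("\n- ")
        return canned[min(prior_steps, len(canned) - 1)]

    def plan(goal: str, max_steps: int = 10) -> list[str]:
        """Chain single inferences: each call extends the plan by one step."""
        steps: list[str] = []
        for _ in range(max_steps):
            prompt = (
                f"Goal: {goal}\nSteps so far:"
                + "".join(f"\n- {s}" for s in steps)
                + "\nNext step (or DONE):"
            )
            nxt = call_llm(prompt).strip()
            if nxt == "DONE":
                break
            steps.append(nxt)
        return steps

    if __name__ == "__main__":
        print(plan("cook pasta"))  # ['1. Boil water', '2. Add pasta', '3. Drain and serve']

Whether you call each individual step "reasoning" is exactly the definitional question the replies below argue about; the loop only shows that no new architecture is needed to go from one inference to a plan.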

  • It’s not reasoning, it’s retrieval of a pattern, and that pattern may contain reasoning.

    The prompt engineering is the real reasoning, provided by the human.

    • Yeah, these kinds of discussions always devolve purely into debates about what's the proper definition of words. Especially on HN where everyone has their "Pedantic Knob" dialed up to 11.

      3 replies →