Comment by quantadev

1 year ago

Planning, by definition, takes multiple reasoning steps. A single LLM inference is fundamentally a single reasoning step, but it's a reasoning step nonetheless.

It's like I'm saying a house is made of bricks. You can build a house of any shape out of bricks, but only once bricks have been invented can you build houses at all. The LLM "reasoning" that existed as early as GPT-3.5 was the "brick" out of which highly intelligent agents can be built, with no further "breakthroughs" required.

The basic Transformer Architecture was enough and already has the magical ingredient of reasoning. The rest is just a matter of prompt engineering.
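
To make that concrete, here's a rough sketch of what I mean: a planning loop built entirely out of single inferences plus prompt engineering. (llm_complete() here is a hypothetical stand-in for whatever chat/completion API you happen to use, not any particular library.)

    # A planning loop built purely out of single LLM inferences ("bricks").
    # llm_complete() is a hypothetical stand-in for any chat/completion API.

    def llm_complete(prompt: str) -> str:
        """One call = one reasoning step (stubbed out here)."""
        raise NotImplementedError("wire this up to your LLM API of choice")

    def plan_and_execute(goal: str, max_steps: int = 5) -> list[str]:
        # One inference to break the goal into steps -- the "planning" part
        # is just prompt engineering wrapped around single reasoning steps.
        plan = llm_complete(
            f"Break this goal into at most {max_steps} numbered steps.\nGoal: {goal}"
        )
        results: list[str] = []
        # One further inference per step, each conditioned on what came before.
        for step in [s for s in plan.splitlines() if s.strip()][:max_steps]:
            results.append(llm_complete(
                f"Goal: {goal}\nDone so far: {results}\nNow do this step: {step}"
            ))
        return results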

It’s not reasoning, it’s retrieval of a pattern, and that pattern may contain reasoning.

The prompt engineering is the real reasoning, provided by the human.

  • Yeah, these kinds of discussions always devolve into debates about the proper definitions of words. Especially on HN, where everyone has their "Pedantic Knob" dialed up to 11.

    • I understand your point. I apologise if I am coming across as pedantic.

      My point is that computers already follow algorithms, and algorithms contain reasoning; but the computers are not reasoning themselves. At least, not yet!
