Comment by antonvs
2 months ago
> It would require you to change the definition of reasoning
What matters here is a functional definition of reasoning: something that can be measured. A computer can reason if it can pass the same tests of reasoning ability that humans pass. LLMs blew past that milestone quite a while back.
If you believe that "thinking" and "reasoning" have some mystical aspect that's not captured by such tests, it's up to you to define it. But you'll quickly run into the limits of such claims: if you attribute non-functional properties to reasoning or thinking, properties that can't be measured, then you also can't prove that they exist. You end up in an intractable area of philosophy that isn't really relevant to the question of what AI models can actually do, which is what matters.
> it does behave very much like a curve fitting search algorithm.
This is just silly. I can have an hours-long coding session with an LLM in which it exhibits a strong functional understanding of the codebase it's working on, a strong grasp of the programming language and tools it's working with, and writes hundreds or thousands of lines of working code.
Please plot the curve that it's fitting in a case like this.
If you really want to stick to this claim, then you also have to acknowledge that what humans do is also "behave very much like a curve fitting search algorithm." If you disagree, please explain the functional difference.