Comment by naasking
4 days ago
It too isn't rigorously defined. We're very much at the hand-waving "I know it when I see it" [1] stage for all of these terms.
I can't speak to academic rigor, but as I understand it, it's quite clear and specific. Reasoning, simply put, is the ability to come to a conclusion by analyzing information with a logic-derived deterministic algorithm.
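To make that concrete, here's a toy sketch of one thing a "logic-derived deterministic algorithm" could look like: a few lines of forward chaining in Python (my own illustration, not a claim about how LLMs work). Same facts and rules in, same conclusions out, every time.

    # Toy deterministic reasoner: repeatedly apply modus ponens
    # until no new facts can be derived.
    def forward_chain(facts, rules):
        """facts: set of propositions; rules: list of (premises, conclusion) pairs."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and all(p in derived for p in premises):
                    derived.add(conclusion)
                    changed = True
        return derived

    rules = [
        ({"it_rained", "ground_exposed"}, "ground_wet"),
        ({"ground_wet"}, "shoes_get_muddy"),
    ]
    print(sorted(forward_chain({"it_rained", "ground_exposed"}, rules)))
    # ['ground_exposed', 'ground_wet', 'it_rained', 'shoes_get_muddy']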
* Humans are not deterministic.
* Humans that make mistakes are still considered to be reasoning.
* Deterministic algorithms have limitations, like Gödel incompleteness, which humans seem able to overcome, so presumably we expect reasoning to be able to overcome such challenges too.
1) I didn't say we were, but when someone is described as reasonable, or as acting with reason, that implies deterministic/algorithmic thinking. When we're not being deterministic, we're not being reasonable.
2) Yes, and reasoning doesn't imply being infallible. The deterministic algorithms we follow are usually flawed.
3) I can't speak much to that, but I speculate that if "AI" can do reasoning, it would be a much more complex construct, one that uses LLMs (among other components) as tools and variables, the way we do.
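As a purely illustrative sketch (llm() and calculator() here are hypothetical stand-ins, not any real API), the shape I have in mind is a deterministic outer algorithm that treats the model as one callable tool among several:

    # Hypothetical sketch only: llm() is a stand-in, not a real library call.
    # The deterministic control flow, not the model, decides what runs.
    def llm(prompt: str) -> str:
        """Stand-in for a call to some language model."""
        raise NotImplementedError("hypothetical")

    def calculator(expr: str) -> float:
        """An exact, deterministic tool the controller prefers when it applies."""
        return float(eval(expr, {"__builtins__": {}}))  # toy arithmetic only

    def answer(question: str) -> str:
        # Route plain arithmetic to the exact tool; everything else to the model.
        if all(c in "0123456789+-*/(). " for c in question):
            return str(calculator(question))
        return llm(question)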