Comment by b40d-48b2-979e, 9 hours ago (6 comments):

LLMs don't "reason".
thot_experiment, 9 hours ago:

Why is this a meaningful distinction to you? What does "reason" mean here? Can we construct a test that cleanly splits what humans do from what LLMs do?
grey-area, 9 hours ago:

Sure, things like counting the ‘r’s in strawberry, for example (until they are retrained not to make that mistake).
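(The test grey-area describes is trivial to state in code, which is part of the point: it's a task any short program solves exactly, yet one that token-based models have famously gotten wrong. A minimal sketch; the function name is illustrative, not from any comment above.)

```python
# The "count the r's in strawberry" test: exact character counting,
# trivial for a program, historically unreliable for LLMs because
# they see tokens rather than individual letters.
def count_char(word: str, ch: str) -> int:
    """Return how many times character `ch` appears in `word`."""
    return word.lower().count(ch.lower())

print(count_char("strawberry", "r"))  # → 3
```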
thot_experiment, 8 hours ago:

There are humans who can't do that but are clearly capable of reasoning. Not a meaningful categorical split.
bensyverson, 9 hours ago:

Take it up with OpenAI's API designers; it's their term.