Comment by dkersten
1 day ago
Here is how GPT self-described LLM reasoning when I asked about it:
- LLMs don’t “reason” in the symbolic, step‑by‑step sense that humans or logic engines do. They don’t manipulate abstract symbols with guaranteed consistency.
- What they do have is a statistical prior over reasoning traces: they’ve seen millions of examples of humans doing step‑by‑step reasoning (math proofs, code walkthroughs, planning text, etc.).
- So when you ask them to “think step by step,” they’re not deriving logic — they’re imitating the distribution of reasoning traces they’ve seen.
This means:
- They can often simulate reasoning well enough to be useful.
- But they’re not guaranteed to be correct or consistent.
That, at least, sounds consistent with what I’ve been trying to say and with what I’ve observed.
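
To make the “think step by step” point concrete, here is a minimal sketch of chain-of-thought prompting against a chat-completions API. The model name, prompt wording, and question are purely illustrative assumptions on my part, not anything GPT said above:

```python
# Minimal sketch of "think step by step" (chain-of-thought) prompting.
# Assumptions: the `openai` Python package is installed, OPENAI_API_KEY is set,
# and the model name below is just an illustrative choice.
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

# Direct prompt: the model answers straight from its prior over question/answer pairs.
direct = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: the added instruction steers the model toward the
# distribution of step-by-step reasoning traces it saw in training, which often
# (but not reliably) improves the final answer.
cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": question + "\nThink step by step before giving the final answer.",
        }
    ],
)

print("Direct:", direct.choices[0].message.content)
print("Step-by-step:", cot.choices[0].message.content)
```

The only difference between the two calls is the extra instruction; the second one typically produces the kind of imitated reasoning trace described above, with no guarantee of correctness or consistency.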