
Comment by nlpnerd

3 months ago

I have always believed that Chain of Thought basically acts as a form of regularization. LLMs are fundamentally next-token predictors with no built-in notion of logic or reasoning, and as probabilistic models they are just as likely to produce a creative continuation as one grounded in facts or principles (or anything that resembles "reasoning").

Asking the LLM to think step by step simply biases it towards the latter. It's still a stochastic parrot, but now it sounds logical, and that happens to be useful in some cases, regardless of whether we agree that it's "reasoning".
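
To make the "biasing" point concrete, here's a rough sketch (assuming the Hugging Face transformers API, with gpt2 purely as a stand-in model; the example question and helper function are my own illustration, not anything from a CoT paper). Nothing about the model changes when you append "Let's think step by step"; the same next-token distribution is just conditioned on a different prefix.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM behaves the same way here
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

question = (
    "Q: A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?\nA:"
)
plain_prompt = question
cot_prompt = question + " Let's think step by step."

def next_token_probs(prompt):
    # Same model, same weights: only the conditioning text differs.
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits at the next position
    return torch.softmax(logits, dim=-1)

for label, prompt in [("plain", plain_prompt), ("step-by-step", cot_prompt)]:
    probs = next_token_probs(prompt)
    top = torch.topk(probs, k=5)
    tokens = [tok.decode([int(i)]) for i in top.indices]
    print(label, [(t, round(float(p), 4)) for t, p in zip(tokens, top.values)])
```

With an instruction-tuned model the shift is more dramatic, but the mechanism is the same either way: conditioning on a prefix that favors step-like continuations, not a separate reasoning mode.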