Comment by ShowalkKama
11 hours ago
The fact that more tokens = more smart should be expected, given CoT / thinking / other techniques that increase model accuracy by using more tokens.
Did you test whether "caveman mode" has similar performance to the "normal" model?
Yes, but: if the token budget is fixed, then density matters.
A lot of communication is just mentioning the concepts.
That is part of it. They are also trained to think in very well-mapped areas of their model. All the RLHF, etc., is tuned on their CoT and on user feedback about responses.