Comment by charcircuit
17 days ago
It's not just force-inserting a word. Reasoning is integrated into the training process of the model.
Reply
17 days ago
> It's not just force-inserting a word. Reasoning is integrated into the training process of the model.
Not the core foundation model. The foundation model still only predicts the next token in a static way. The reasoning is tacked onto the InstructGPT-style finetuning step, and it's done through prompt engineering, which is the shittiest way a model like this could have been done, and it shows. A rough sketch of what I mean is below.
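
A minimal sketch of that point, assuming a hypothetical next_token() stub standing in for the frozen base model: the "reasoning" is just ordinary autoregressive next-token prediction over a prompt format the finetuning stage taught the model to fill in between delimiter tokens. The delimiters and canned tokens here are illustrative, not any real model's vocabulary.

```python
# Hypothetical sketch: "reasoning" as plain next-token prediction.
# next_token() stands in for the base model; a real model would run a
# forward pass and sample one token, nothing more.

def next_token(context: list[str]) -> str:
    # Stub sampler: a real model scores the whole vocabulary and samples.
    canned = ["<think>", "step", "1", "...", "</think>", "answer", "<eos>"]
    return canned[min(len(context) - 1, len(canned) - 1)]

def generate(prompt: list[str], max_tokens: int = 32) -> list[str]:
    context = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(context)  # the same static next-token call every step
        context.append(tok)
        if tok == "<eos>":
            break
    return context

# Finetuning only changes *which* tokens get high probability (e.g. emitting
# a <think> ... </think> span before the answer); the decoding loop above is
# identical whether or not the model "reasons".
print(generate(["question"]))
```
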