Comment by sidkshatriya

2 days ago

Does this mean that you can only do GRPO on models that already produce reasoning traces in <think>...</think>?

Oh, not at all!! You can actually train a model to generate the <think>...</think> tokens itself! That's how DeepSeek trained R1 Zero, which essentially gave the model reasoning skills!

  • Won't you have to use a distilled DeepThink model then? Because the GRPO training phase requires the model to put its reasoning within <think></think> for the least loss.

    • Oh no no!! The trick with GRPO is that you essentially let the model "learn" how to do reasoning itself!!!

      The <think> tokens are just a formatting choice. You could use <reasoning>, <thinking> or [reasoning] in the system prompt instead, for example.

Nah, you can just request that format in your prompt and then fail answers that are incorrect and/or don't include the think trace.
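
The reward described above — penalize a missing think trace, then check the final answer — can be sketched as a simple scoring function. This is a toy illustration, not code from any particular GRPO library; the function name, tag regex, and reward weights are all made up for the example:

```python
import re

# Require a <think>...</think> block followed by a final answer.
# (The tag itself is arbitrary -- swap in <reasoning> etc. if you prefer.)
THINK_RE = re.compile(r"<think>(.+?)</think>\s*(.+)", re.DOTALL)

def reward(completion: str, correct_answer: str) -> float:
    """Toy GRPO-style reward: fail completions with no reasoning trace,
    then score the extracted answer against the known-correct one.
    Weights (-1.0 / -0.5 / 1.0) are illustrative, not canonical."""
    m = THINK_RE.fullmatch(completion.strip())
    if m is None:
        return -1.0  # no think trace at all -> fail the format check
    answer = m.group(2).strip()
    return 1.0 if answer == correct_answer else -0.5

# Usage: score a few candidate completions for the question "2 + 2 = ?"
print(reward("<think>2 plus 2 is 4</think> 4", "4"))  # formatted and correct
print(reward("4", "4"))                               # correct but no trace
print(reward("<think>hmm</think> 5", "4"))            # traced but wrong
```

In GRPO you would compute such a reward for each of several sampled completions per prompt and use the group-relative advantages to update the policy; the point here is only that the format requirement is enforced by the reward, not baked into the model.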