
Comment by sidkshatriya

2 days ago

> Won't you have to use a distilled DeepThink model then? Because the training phase with GRPO required its reasoning to be within <think></think> to get the lowest loss.

Oh no no! The trick with GRPO is that you essentially let the model "learn" how to do reasoning by itself!
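
Rough sketch of what I mean (plain Python, nothing library-specific; the `Answer:` convention and the example completions are just made up for illustration). The reward only looks at the final answer, never at the reasoning text, and GRPO just normalises rewards within each group of completions sampled for the same prompt, so whatever reasoning happens to lead to correct answers gets reinforced:

```python
import re

def correctness_reward(completion: str, gold_answer: str) -> float:
    # Score ONLY the final answer; the reasoning text itself is never supervised.
    match = re.search(r"Answer:\s*(.+)", completion)
    predicted = match.group(1).strip() if match else ""
    return 1.0 if predicted == gold_answer else 0.0

def grpo_advantages(rewards: list[float]) -> list[float]:
    # GRPO: advantage_i = (r_i - mean(group)) / std(group), computed over the
    # group of completions sampled for the same prompt.
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5 or 1.0
    return [(r - mean) / std for r in rewards]

# Example: four sampled completions for one prompt, two end in the right answer.
rewards = [correctness_reward(c, "42") for c in [
    "Let me think... Answer: 42",
    "Hmm, maybe 7. Answer: 7",
    "Step by step... Answer: 42",
    "Answer: 13",
]]
print(grpo_advantages(rewards))  # correct completions get positive advantages
```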

The <think> tags are only there for formatting; you could just as well use <reasoning>, <thinking>, or [reasoning] in the system prompt, for example.
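
Something like this, for instance (a minimal sketch; the prompt wording and the reward weight are placeholders I picked): you declare whatever tags you like in the system prompt and add a small format reward so the model gets nudged into actually using them.

```python
import re

# Hypothetical tag choice -- could just as well be <think>, <thinking>, [reasoning], ...
SYSTEM_PROMPT = (
    "Respond in exactly this format:\n"
    "<reasoning>\n...\n</reasoning>\n"
    "<answer>\n...\n</answer>"
)

def format_reward(completion: str) -> float:
    # Small bonus if the completion follows the requested tag layout.
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    return 0.5 if re.search(pattern, completion, re.DOTALL) else 0.0
```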