Comment by danielhanchen

2 days ago

Yes you're correct!

Very good question on SFT vs GRPO!

Assume the dataset I have is "What is 2+2?", "The answer is 4".

1. If you have very high quality labelled data, SFT should work fine. I.e. "What is 2+2? Let me think about it..... The answer is 4"

2. If you only have the input "What is 2+2" and just the answer "4", but nothing in between, GRPO could be very helpful! GRPO can help produce the reasoning traces automatically - you will need to provide some scoring / reward functions though. For example, if the answer == 4, score +1 (see the sketch after this list).

3. You can combine SFT and GRPO! Do SFT first, then GRPO - doing SFT first usually helps GRPO converge faster!
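
As a rough illustration of the reward idea in point 2, here is a minimal Python sketch. It assumes the common reward-function style of taking a batch of completion strings and returning one score per completion; the function name, signature, and number extraction are illustrative, not any particular library's API:

```python
import re

def correctness_reward(completions, answer="4", **kwargs):
    """Toy correctness reward for "What is 2+2?": +1 if the completion's
    final number matches the reference answer, else 0."""
    scores = []
    for completion in completions:
        # Pull the last number out of the model's completion, if any.
        numbers = re.findall(r"-?\d+", completion)
        guess = numbers[-1] if numbers else None
        scores.append(1.0 if guess == answer else 0.0)
    return scores

# Example: returns [1.0, 0.0]
print(correctness_reward(["Let me think... 2+2 is 4", "The answer is 5"]))
```

In practice you could stack several such functions (correctness, formatting, length, etc.) and let the trainer combine their scores.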

Does this mean that you can only do GRPO on models that already produce reasoning traces in <think>...</think>?

  • Oh not at all!! You can actually convert a model to even generate the <think>...</think> tokens themselves! That's how DeepSeek trained R1 Zero, which essentially gave the model reasoning skills!

    • Won't you have to use a distilled DeepThink model then? Because the GRPO training phase requires its reasoning to be within <think></think> for the least loss.


  • Nah, you can just request that in your prompt and then fail answers that are incorrect and/or don't include the think trace (see the sketch below).
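
A minimal sketch of such a format-checking reward, in the same style as the correctness reward above (the regex and score values are assumptions for illustration):

```python
import re

# Reward the presence of a well-formed <think>...</think> block, so the model
# is pushed to emit its own reasoning trace during GRPO.
THINK_PATTERN = re.compile(r"<think>(.+?)</think>", re.DOTALL)

def format_reward(completions, **kwargs):
    scores = []
    for completion in completions:
        match = THINK_PATTERN.search(completion)
        # Small positive score for a non-empty think trace; 0 otherwise.
        scores.append(0.5 if match and match.group(1).strip() else 0.0)
    return scores
```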

Can you give some real-world examples of when this would be useful? Does this work for tasks requiring tool calling as well?