Comment by dr_dshiv
2 months ago
So thinking 5x faster, but lower quality (for now).
Anyone have experience or data on how lower-quality reasoning during the thinking stage affects the final output of a higher-quality model? In other words, is it worthwhile to generate lots of lower-quality thinking that is then consumed by a higher-quality model?
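To make the pattern concrete, here's a minimal sketch of the kind of two-stage pipeline I mean; `call_model` and the model names are hypothetical placeholders for whatever inference API and models you actually use:

```python
# Hypothetical two-stage pipeline: a cheap, fast model does the "thinking",
# a stronger model writes the final answer. `call_model` is a placeholder
# for your actual inference API.

def call_model(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text output."""
    raise NotImplementedError("wire this up to your inference API")

def answer_with_cheap_thinking(question: str) -> str:
    # Stage 1: the fast, lower-quality model generates a long reasoning trace.
    trace = call_model(
        "small-fast-model",
        f"Think step by step about this question. Be thorough:\n{question}",
    )
    # Stage 2: the stronger model sees the question plus the draft reasoning
    # and produces the final answer, free to discard or correct the trace.
    return call_model(
        "large-strong-model",
        f"Question:\n{question}\n\n"
        f"Draft reasoning from a weaker model (may contain errors):\n{trace}\n\n"
        "Using the draft only where it helps, give a careful final answer.",
    )
```

The open question is whether the stage-2 model gains more from the extra (noisy) reasoning than it loses by being anchored to the weaker model's mistakes.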