Comment by andrewmcwatters
2 months ago
I think that's supposed to be the idea of the reasoning functionality, but in practice it mostly seems to let responses run longer than they otherwise would: the output is split in two, with a first pass that warms up an answer, and then what we might consider cached tokens from that pass assisting with further contextual lookups.
That is to say, you can obtain the same process by talking to "non-reasoning" models.
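To make the claim concrete, here is a minimal sketch of that two-pass process with a plain completion model. The `complete` function is a hypothetical stand-in (stubbed here so the sketch runs); any chat or completion API would slot in.

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real completion API call; stubbed so the
    # sketch is self-contained and runnable.
    return f"[model output for: {prompt[:40]}]"

def two_pass_answer(question: str) -> str:
    # Pass 1: "warm" the output by having the model think out loud.
    scratch = complete(f"Think step by step about: {question}")
    # Pass 2: feed that scratchpad back in, so the final answer can draw on
    # the earlier tokens much like cached reasoning tokens.
    return complete(
        f"Question: {question}\n"
        f"Notes:\n{scratch}\n"
        f"Using the notes above, give a concise final answer."
    )

print(two_pass_answer("Why is the sky blue?"))
```

The same warm-up-then-answer shape is what a reasoning model performs internally in a single call.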