Comment by jug
6 months ago
I was just thinking: since GPT-4o and Sonnet are closed models, do we know that this method wasn't already used to train them, and that Reflection is simply finding a path to greater improvements than they did? Llama 3.1 apparently didn't improve as much. It's just a thought, though.
If they had, this thing wouldn't be trading punches with them at its size.
Sonnet does something like this. See: https://tyingshoelaces.com/blog/forensic-analysis-sonnet-pro...
What are the parameter sizes of 4o and Sonnet?