Comment by sheepscreek
10 hours ago
I get much better output from o1* models when I dump a lot of context and leave a detailed but tightly scoped prompt with minimal ambiguity. Sometimes I even add: "Don't assume - ask me if you are unsure." What I get back is usually very high quality, to the point that I feel my 95th-percentile coding skills offer diminishing returns. I find I am more productive researching and thinking about the what, and leaving the how (the implementation details) to the model - nudging it along.
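Roughly, the shape of such a prompt looks like this (a minimal sketch assuming the official OpenAI Python client; the model name, file paths, and task below are placeholders, not anything specific I used):

    # Sketch of the "context dump + tightly scoped ask" pattern.
    # Assumes the official OpenAI Python client; "o1" and the file
    # list are placeholders, not a specific recommendation.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 1. Dump a lot of context: paste the relevant source files verbatim.
    context_files = ["models.py", "services/billing.py"]  # hypothetical paths
    context = "\n\n".join(
        f"### {name}\n{Path(name).read_text()}" for name in context_files
    )

    # 2. Detailed but tightly scoped task, with the "don't assume" escape hatch.
    task = (
        "Add proration support to the billing service when a subscription "
        "is upgraded mid-cycle. Keep the public API of BillingService unchanged. "
        "Don't assume anything about the schema - ask me if you are unsure."
    )  # hypothetical task, for illustration only

    response = client.chat.completions.create(
        model="o1",  # placeholder model name
        messages=[{"role": "user", "content": f"{context}\n\n{task}"}],
    )
    print(response.choices[0].message.content)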
One last thing, anecdotally: I find it's often better to start a new chat after implementing a chunky bit of functionality.