
Comment by sheepscreek

2 months ago

I get much better output from o1* models when I dump a lot of context and leave a detailed but tightly scoped prompt with minimal ambiguity. Sometimes I even add: "don't assume; ask me if you are unsure." What I get back is usually very, very high quality. To the point that I feel my 95th-percentile coding skills have diminishing returns. I find that I am more productive researching and thinking about the what, and leaving the how (implementation details) to the model, nudging it along.
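The pattern described above (context dump + tightly scoped task + a "ask, don't assume" nudge) can be sketched roughly like this. This is just an illustration of how I'd assemble the prompt; the helper name and message layout are my own, assuming an OpenAI-style chat-message format:

```python
def build_messages(context: str, task: str) -> list[dict]:
    """Combine a large context dump with a tightly scoped task prompt."""
    system = (
        "Work only from the provided context. "
        # The clarifying-question nudge from the comment above:
        "Don't assume; ask me if you are unsure."
    )
    user = f"Context:\n{context}\n\nTask (scoped, minimal ambiguity):\n{task}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    context="<dump of relevant files, docs, and constraints>",
    task="Implement the parser change described above; return a diff only.",
)
# `messages` would then be passed to whatever chat-completion client you use.
```

The point is less the exact wording than the shape: all the context in one place, one narrow task, and an explicit escape hatch so the model asks instead of guessing.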

One last thing, anecdotally: I find that it's often better to start a new chat after implementing a chunky bit of functionality.

Yes, I've tried out both: ordering it to ask me questions upfront, and sometimes restarting with an edited 'report' and a prototype implementation for a 'clean start'. It feels like it sometimes helps... but I have no numbers or rigorous evidence on that.