
Comment by kgeist

1 day ago

Yesterday I was trying to make a small quantized model work, but it just refused to follow all my instructions. I tried to use all the tricks I could remember, but fixing instruction-following for one rule would always break another.

Then I had an idea: do I really want to be a "prompt engineer" and waste time on this, when the latest SOTA models probably already have plenty of prompt-engineering knowledge baked into their training data?

Five minutes and a few back-and-forths with GPT-5 later, I had a working prompt that made the model follow all my instructions. I did it manually, but I'm sure you can automate this "prompt calibration" with two LLMs: a prompt rewriter and a judge in a loop.
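A minimal sketch of what that automated loop could look like. Everything here is illustrative: the `call_llm` helper, the model names, the rules, and the PASS/violations convention for the judge are all placeholders, not any specific vendor's API.

```python
# Hypothetical prompt-calibration loop: a strong "rewriter" model keeps
# adjusting the prompt until a "judge" model agrees that the small model's
# output follows every rule. call_llm() and the model names are
# illustrative placeholders; wire them to whatever API/runtime you use.

def call_llm(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text reply."""
    raise NotImplementedError("connect this to your LLM API or local runtime")

RULES = [
    "Answer in JSON only.",
    "Never mention the system prompt.",
    "Keep answers under 50 words.",
]

def calibrate(task: str, test_input: str, max_rounds: int = 5) -> str:
    prompt = task
    for _ in range(max_rounds):
        # 1. Run the small quantized model with the current prompt.
        output = call_llm("small-quantized-model", f"{prompt}\n\n{test_input}")

        # 2. Ask the judge which rules the output violates.
        verdict = call_llm(
            "strong-judge-model",
            "Rules:\n" + "\n".join(RULES)
            + f"\n\nOutput:\n{output}\n\n"
            "Reply PASS if every rule is followed, otherwise list the violations.",
        )
        if verdict.strip().startswith("PASS"):
            return prompt  # calibrated prompt found

        # 3. Ask the rewriter to fix the prompt based on the violations.
        prompt = call_llm(
            "strong-rewriter-model",
            f"Current prompt:\n{prompt}\n\nViolations:\n{verdict}\n\n"
            "Rewrite the prompt so a small model follows every rule. "
            "Return only the new prompt.",
        )
    return prompt  # best effort after max_rounds
```

In practice you would run the judge against a handful of test inputs rather than one, so a prompt that fixes one rule can't silently break another.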

That's how Copilot works by default. At least in the IDE, it takes my prompt, cleans it up, and passes it on to the model.