Comment by BoorishBears

3 months ago

I like that, actually. I've spent the last year split roughly 60:40 between post-training and prompt engineering/witch doctoring (the two go together more than most people realize).

Some of it is engineering-like, but I've also picked up a sixth sense, when modifying prompts, for which parts are actually driving the behavior I want to change in a given model, and that part feels very witch-doctory!

The more engineering-like part is essentially trying to reverse-engineer a black-box model's post-training, but that goes over some people's heads, so I'm happy to help keep the "it's just voodoo and guessing" narrative going instead :)

I think the coherence behind prompt engineering is not in the literal meanings of the words but in finding the vocabulary used by the sources that contain your solution. Ask questions like a high school math student and you get elementary-level answers back. Ask questions in the lingo of a Linux bigot and you will get good awk scripts back. Use academic maths language and you'll get arXiv-flavored answers.
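
To make that concrete, here's a toy sketch of the same question phrased in three registers (the prompts are illustrative examples I made up, not from any real project):

```python
# Toy illustration of vocabulary matching: the same underlying task,
# phrased in three different registers. All prompts are hypothetical.
same_question = {
    "high-school register": "how do I grab just the third column out of a text file?",
    "unix-greybeard register": "awk one-liner to print $3 from whitespace-delimited input",
    "academic register": "extract the third coordinate of each row vector in a delimited matrix file",
}

# Each phrasing points the model at a different slice of its training
# data, so the answer tends to come back in that slice's vocabulary
# (and with that slice's preferred tools).
for register, prompt in same_question.items():
    print(f"{register}: {prompt}")
```

Same task every time; what changes is which "neighborhood" of the training data the wording lands in.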