Comment by jacob019

6 days ago

That's a very dangerous thought. Prompt engineering, evolved, is just clear and direct communication. That's a hard thing to get right when talking to people. Heck, personally I can have a hard time with clear and coherent internal dialogue. When I'm working with models and encounter unexpected results, it often boils down to the model giving me what I asked for instead of what I want. I've never met anyone who always knows exactly what they want and is able to articulate it with perfect clarity. Some of the models are surprisingly good at figuring out intent, but complexity inevitably requires additional context. Whether you're working with a model or a person, or even your future self, you must spend time developing and articulating clear specifications; that is prompt engineering. Furthermore, models don't "think" like people, so there's technique in how you structure specifications for optimal results.

Fair enough. I guess I was mainly thinking of how rarely I need to use the old prompt engineering techniques. Stuff like "You are an expert software developer...", "You must do this or people will die," etc.

I just tell the AI what I want, with sufficient context. Then I read the reasoning trace to verify it understood what I wanted. You need to be clear in your prompts, sure, but I don't really see that as "prompt engineering" any more.