
Comment by ianbicking

10 months ago

"It might feel like your applications’ prompts or prompt templates are a good form of differentiation. After all, your top-notch engineering team has invested days into tuning them to have the right response characteristics, tone, and output style. Of course, giving your competitors your prompts would probably accelerate their progress, but any good engineering team will figure out the right changes quickly. The main reason is that the experimentation (with the right evaluation data!) is quick and easy — trying a new prompt template isn’t much harder than writing it out. All it really takes is a little bit of patience, some creativity, and extra Azure OpenAI credits."

And yet, over and over, I see products whose output clearly comes from simple and frankly lazy prompting. You can do a lot with prompting, but engineers are not putting in the work! (If an engineer is even the right person for the job... probably not, for most specific applications of an LLM.)

Prompting also isn't so reductive that you simply write an evaluation and then iterate on the prompt until the evaluation is satisfied. Prompting is a co-creative exercise between the LLM, the domain expert, the product, and the user. And sure, "data" fits in there, along with relationships, comprehensibility, workflows, and so on... the AI component is just a small piece of any full application.