Comment by frumiousirc

20 hours ago

Yours matches my own experience and work habits.

My mental model is that LLMs are obedient but lazy. The laziness shows in output that matches the letter of the prompt while carrying as high a "code entropy" as possible.

What I mean by "code entropy": copy-paste-tweak (high entropy) is always easier, in the short term, for LLMs and humans alike than defining a function to hold the concepts common across the pastes, with the "tweak" represented by function arguments.
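A tiny sketch of the difference (the pricing functions here are made up for illustration):

```python
# High entropy: copy-paste-tweak. The shared concept (scale, then add a fee)
# is repeated in every paste, with only the constants tweaked.
def price_usd(x):
    return x * 1.00 + 0.30

def price_eur(x):
    return x * 0.92 + 0.25

# Lower entropy: the common concept lives in one function, and the
# "tweak" is represented by function arguments.
def price(x, rate, fee):
    return x * rate + fee

assert price(10, 1.00, 0.30) == price_usd(10)
assert price(10, 0.92, 0.25) == price_eur(10)
```

The high-entropy version is faster to emit but leaves N copies of the same idea to keep in sync.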

LLMs will produce high entropy output unless constrained to produce lower entropy ("better") code.

Until/unless LLMs are trained to actually apply craft learned by experienced humans, we must be explicit in our prompts.

For example, I get good results from, say, Claude Sonnet when my instructions include:

- Specific file, class, and function names to use.

- Explicit design patterns to apply. ("loop over the outer product of lists of choices for each category")

- Implementation hints ("use itertools.product() to iterate over the combinations")

- And "ask questions if you are uncertain", which triggers a quick clarifying iteration up front instead of my having to fix the resulting code afterward.
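To make the design-pattern and implementation hints concrete, here is the kind of shape I'm asking for (the category names and choices are purely illustrative):

```python
import itertools

# Hypothetical "lists of choices for each category".
sizes = ["small", "large"]
colors = ["red", "blue"]
materials = ["wood", "metal"]

# "Loop over the outer product of lists of choices for each category",
# using itertools.product() to iterate over the combinations.
combos = list(itertools.product(sizes, colors, materials))

for size, color, material in combos:
    print(size, color, material)
```

Naming both the pattern and the stdlib tool in the prompt steers the model away from hand-rolled nested loops.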

This specificity makes prompting a lot more work but it pays off. I only go this far when I care about the resulting code. And, I still often "retouch" as you also describe.

OTOH, when I'm vibing I'll just give end goals and let the slop flow.