Comment by ruszki
6 days ago
The exact people who told me I was using LLMs wrongly, and then showed me their code, showed me bad code. So let’s say: “more documentation, which is unnecessary most of the time, i.e. noise; more test cases, which don’t necessarily test what they should; and an NLP interface which lies from time to time” — and we agree. LLM-generated code is noisy as hell, for no good reason. Maybe it’s good for you and your type of work. I need to produce far better code than that. I don’t know why we pretend that “good code”, “good documentation”, “good tests” etc. are the same for everybody.
>LLM generated code is noisy as hell, for no good reason
You can direct it to generate code/docs in whatever format or structure you want, prioritising good practices and avoiding bad ones, and then manually edit as needed
For example with documentation I direct it to:
*Goal:* Any code you generate must lower cognitive load and be backed by accurate, minimal, and maintainable documentation
1. *Different docs answer different questions* — don’t duplicate; *link* instead.
2. *Explain _why_, not just what.* Comments carry rationale, invariants, and tradeoffs.
3. *Accurate or absent.* If you can’t keep a doc truthful, remove it and add a TODO + owner.
4. *Progressive disclosure.* One‑screen summaries first; details behind links/sections.
5. *Examples beat prose.* Provide minimal, runnable examples close to the API.
6. *Consistency > cleverness.* Uniform structure, tone, and placement.
I also include an instruction telling it to refuse the prompt if it cannot satisfy these conditions
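As a concrete sketch (the function and variable names here are illustrative, not any particular SDK's API), the rules above can be assembled into a reusable system prompt string that you pass to whatever chat interface you use:

```python
# Sketch: packaging the documentation rules above into a system prompt.
# The rule texts and the refusal clause come straight from the list;
# everything else (names, wording of the refusal line) is illustrative.

DOC_RULES = [
    "Different docs answer different questions -- don't duplicate; link instead.",
    "Explain why, not just what: comments carry rationale, invariants, tradeoffs.",
    "Accurate or absent: if a doc can't stay truthful, remove it, add TODO + owner.",
    "Progressive disclosure: one-screen summaries first, details behind links.",
    "Examples beat prose: minimal, runnable examples close to the API.",
    "Consistency over cleverness: uniform structure, tone, and placement.",
]

def build_system_prompt(rules):
    # Number the rules so the model can cite which one it is applying.
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, 1))
    return (
        "Goal: any code you generate must lower cognitive load and be "
        "backed by accurate, minimal, and maintainable documentation.\n"
        f"{numbered}\n"
        "If you cannot satisfy these conditions, refuse the request and "
        "say which condition fails."
    )

prompt = build_system_prompt(DOC_RULES)
```

The resulting string goes in as the system/instruction message of whichever LLM client you use; keeping the rules in a plain list makes it easy to version them alongside the codebase.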
>I don’t know why we pretend that “good code”, “good documentation”, “good tests” etc are the same for everybody
Of course code, docs, tests are all subjective and maybe even closer to an art than a science
But there are also objectively good habits and objectively bad ones, and you can steer an LLM toward the former pretty well