Comment by alwillis
11 hours ago
> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
I don’t think so.
An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, they're going to have an LLM produce any documentation a new developer needs, including why certain decisions were made.
It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.
As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.
> An LLM can produce higher-quality documentation than most humans.
That "can" is bearing some heavy weight.
LLM-generated documentation has such a low information density that it's useless. Yes, it writes nice sentences… boy, does it write. But it contains so much noise that, right now, reading the code is better documentation than any LLM-generated documentation I've seen.
The same goes for LLM-generated articles. I close them after the second sentence, because about 90% of the text is useless filler.
Now compare that to this: https://slate.com/technology/2004/11/the-death-of-the-last-m...
I almost closed it after the first few sentences, because these kinds of articles are usually useless, time-wasting nonsense. But this one was different. It was old. Most sentences contained something new. Something worthwhile. (Of course, humans also write unnecessarily long articles… looking at you, The Atlantic.)
You can throw out almost everything by volume from LLM-generated documentation without losing any information.
Currently, if I smell (and it's very easy to smell) an LLM-generated article or piece of documentation, I close it immediately, because it's good for only one thing: wasting my time.
> LLM-generated documentation has such a low information density that it's useless. Yes, it writes nice sentences… boy, does it write. But it contains so much noise that, right now, reading the code is better documentation than any LLM-generated documentation I've seen.
I should clarify: the documentation I’m talking about is not generated using a generic LLM prompt, which would mostly suck.
With the proper context and additions (skills, plugins, MCPs) LLMs can produce high-quality documentation. You'd also have subagents doing QA of the documentation.
But it does require effort; it’s not magic.
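For what it's worth, the generate-then-review loop described above can be sketched in a few lines. Everything here is hypothetical: `call_llm` is a stand-in for whatever model API you actually use, the prompts are illustrative, and the filler heuristic is just one example of the kind of cheap check a QA subagent might run before a human sees the draft.

```python
# Hypothetical generate-then-QA documentation pipeline (sketch only).
# `call_llm` is a placeholder; the prompts and the filler-phrase
# heuristic are illustrative assumptions, not any real tool's API.

FILLER_PHRASES = (
    "in today's fast-paced world",
    "it is important to note",
    "plays a crucial role",
    "delve into",
)

def call_llm(prompt: str) -> str:
    """Stub: replace with a real model call (API client, local model, etc.)."""
    raise NotImplementedError

def generate_docs(source_code: str, context: str) -> str:
    """First pass: generate docs with real context, not a bare generic prompt."""
    prompt = (
        "Document this module. Use the design notes for the *why*, "
        "not just the *what*.\n"
        f"Context:\n{context}\n\nCode:\n{source_code}"
    )
    return call_llm(prompt)

def qa_pass(draft: str) -> list[str]:
    """Cheap QA before a reviewer subagent: flag known filler phrases."""
    lowered = draft.lower()
    return [phrase for phrase in FILLER_PHRASES if phrase in lowered]
```

The point of the sketch is the shape, not the heuristic: the QA step is what separates "context plus review" output from the low-density generic-prompt output the parent comment is complaining about.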
It's not just about documentation.
If stuff really goes wrong, you need people who deeply understand the codebase so that they know where to look and how to diagnose the issue. It might be the case in the future that LLMs become so powerful they'll diagnose any issue (I doubt it), but until then, we need people in the loop.