Comment by gassi
10 hours ago
> Why does it matter?
Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.
But all those "it's AI" posts are about the prose and "style", not the actual content. So even if (and that is a big if) the text was written with the help of AI (and there are many valid reasons to use it, e.g. if you're not a native speaker), that does not mean the content was written by AI and thus contains AI mistakes.
If it was so obviously written by AI, then shouldn't finding those mistakes be easy?
The style is the easiest thing for people to catch; GP said that the technical issues can be more difficult to find, especially in longer texts, though there are times when they are indeed caught.
Passing even correct information through an LLM may or may not taint it; it can produce sentences that look similar at first glance but carry a different, imprecise meaning - specific wording may be crucial in some cases. So if the style is in question, the content is as well. And if you can write technically correct text in the first place, why would you put it through another step?
Humans get things wrong too.
Prose usually only becomes quality prose after many rounds of review.
AI tools make different types of mistakes than humans, and that's a problem. We've spent eons creating systems to mitigate and correct human mistakes, which we don't have for the more subtle types of mistakes AI tends to make.
AI gets things wrong ("hallucinates") much more often than actual subject matter experts do. Comparing the two is disingenuous.
Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer would. I think it's disingenuous to assume that just because someone used AI they didn't look at or review the output.
Fortunately, we can't just get rid of humans (right?), so we have to use them _somehow_.
If AI is used "fire and forget", sure - there's a good chance of slop.
But if you carefully review and iterate the contributions of your writers - human or otherwise - you get a quality outcome.
Absolutely.
But why would you trust the author to have done that when they are lying in a very obvious way about not using AI?
Using AI is fine, it's a tool, it's not bad per se. But claiming very loud you didn't use that tool when it's obvious you did is very off-putting.
That's fine. Write it out yourself and then ask an AI how it could be improved, as a diff. Now you've given it double human review (once in creation, then again when reviewing the diff) and a single AI review.
That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal.