Comment by gassi

5 hours ago

> Why does it matter?

Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid AI entirely when writing documentation.

But all those "it's AI" posts are about the prose and style, not the actual content. So even if (and that is a big if) the text was written with the help of AI (and there are many valid reasons to use it, e.g. if you're not a native speaker), that does not mean the content came from AI and thus contains AI mistakes.

If it was so obviously written by AI, shouldn't finding those mistakes be easy?

  • Style is the easiest thing for people to catch; GP said that the technical issues can be more difficult to find, especially in longer texts, though there are times when they are indeed caught.

    Passing even correct information through an LLM may or may not taint it: the model can produce sentences that look similar at first glance but carry a different, less precise meaning, and specific wording can be crucial in some cases. So if the style is in question, the content is as well. And if you can write the technically correct text in the first place, why put it through another step?

Humans get things wrong too.

Prose usually only becomes quality prose after many rounds of review.

  • AI tools make different kinds of mistakes than humans do, and that's a problem. We've spent eons building systems to mitigate and correct human mistakes; we have no equivalent for the subtler kinds of mistakes AI tends to make.

  • AI gets things wrong ("hallucinates") far more often than actual subject-matter experts do. Equating the two is disingenuous.

  • That’s fine. Write it out yourself, then ask an AI to propose improvements as a diff. Now the text has had double human review (once during creation, again when reviewing the diff) and a single AI review, roughly like the sketch below.
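
    A minimal sketch of that workflow in Python, assuming a hypothetical
    ask_model() helper standing in for whatever LLM client you actually
    use; only difflib is real here:

        import difflib

        def ask_model(draft: str) -> str:
            # Hypothetical stand-in for an LLM call: send the
            # human-written draft to the model of your choice and
            # return its suggested revision. Not a real API.
            raise NotImplementedError

        def review_as_diff(path: str) -> None:
            # First human review happens while writing the draft itself.
            with open(path) as f:
                original = f.readlines()
            # Single AI review: ask the model for a revised version.
            revised = ask_model("".join(original)).splitlines(keepends=True)
            # Second human review: read the unified diff line by line
            # and accept or reject each suggestion against the original.
            for line in difflib.unified_diff(
                    original, revised, fromfile=path, tofile=path + ".ai"):
                print(line, end="")

    The point of reviewing a diff rather than a fresh rewrite is that it
    forces the reviewer to look at exactly what the model changed instead
    of rereading the whole text.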