Comment by rmunn
1 day ago
I've had too many LLMs tell me that software product ABC can do XYZ, only to discover when I actually read the ABC documentation that the claim was a hallucination and the opposite of reality: the docs say "we cannot do XYZ yet, but we're working on it." So for me, the question at the back of my mind whenever I encounter an obviously LLM-generated article is always, "Which parts of this article are factually correct, and which parts are hallucinations?" I care less about the "human voice" aspect than about the accuracy of the technical claims the article presents.
In this particular case, if the facts about how many years ago various products came out are wrong, it doesn't matter, since I'm never going to rely on those dates anyway. Likewise, the fact that what the author is proposing isn't ASCII but UTF-8-encoded Unicode (emojis aren't ASCII) doesn't matter (and I rather suspect that particular error would have been present even if he had written the text entirely by hand, with no LLM input), because again, I'm not going to rely on that detail for anything. The idea he presents is interesting, and it is obviously possible.
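For anyone who wants to see the ASCII-vs-Unicode point concretely, here's a minimal Python sketch (not from the article, just an illustration): an emoji can't be encoded as ASCII at all, while UTF-8 represents it as a multi-byte sequence whose bytes all fall outside the 7-bit ASCII range.

```python
# Illustration: emojis are Unicode, not ASCII.
emoji = "🙂"  # U+1F642

try:
    emoji.encode("ascii")
except UnicodeEncodeError:
    print("Cannot be encoded as ASCII")  # this branch runs

# UTF-8 encodes it as four bytes, each >= 0x80 (outside the ASCII range):
print(emoji.encode("utf-8"))  # b'\xf0\x9f\x99\x82'
```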
So I care less about the "voice" of an article and a LOT about its accuracy.
I should add that for me, when it comes to LLMs telling me "facts" that are the opposite of reality, "too many" equals ONE or more.
This is an ongoing problem for those of us who use LLMs every day. I have to check and recheck what they claim is possible.