Comment by Springtime

4 days ago

Ars Technica getting caught using LLMs that hallucinated quotes attributed to the author, and then publishing them in their coverage of this very story, is quite ironic.

Even on a forum where I saw the original article by this author posted, someone used an LLM to summarize the piece without having fully read it themselves.

How many levels of outsourced thinking are occurring before it becomes a game of telephone?

Also ironic: When the same professionals advocating "don't look at the code anymore" and "it's just the next level of abstraction" respond with outrage to a journalist giving them an unchecked article.

Read through the comments here and mentally replace "journalist" with "developer" and wonder about the standards and expectations in play.

Food for thought on whether the users who rely on our software might feel similarly.

There are many places to take this line of thinking, e.g. one argument would be "well, we pay journalists precisely because we expect them to check" or "in engineering we have test-suites and can test deterministically", but I'm not sure any of them hold up. "The market pays for the checking" might also apply to developers reviewing AI code at some point, and those test-suites increasingly get vibed and only checked empirically, too.

Super interesting to compare.

  • - There’s a difference. Users don’t see code, only its output. Writing is “the output”.

    - A rough equivalent here would be Windows shipping an update that bricks your PC or one of its basic features, which draws plenty of outrage. In both cases, the vendor shipped a critical flaw to production: factual correctness is crucial in journalism, and a quote is one of the worst things to get factually incorrect because it’s so unambiguous (inexcusable) and misrepresents who’s quoted (personal).

    I’m 100% ok with journalists using AI as long as their articles are good, which at minimum requires factual correctness and not being vacuous. Likewise, I’m 100% ok with developers using AI as long as their programs are good, which at minimum requires decent UX and no major bugs.

    • > - There’s a difference. Users don’t see code, only its output. Writing is “the output”.

      So how is the "output" checked, then? Part of the reason code review is considered necessary in the first place is that we can't actually empirically test everything we need to. If the software will programmatically delete the entire database next Wednesday, there is no way to test for that in advance. You would have to see it in the code.
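
      To make that concrete, here is a minimal sketch (Python; the path and trigger date are hypothetical) of a date-gated branch that no empirical test run before the trigger date would ever exercise, and that only reading the code would reveal:

          import datetime
          import shutil

          def nightly_cleanup(data_dir: str = "/var/lib/app/data") -> None:
              # Behaves as a harmless no-op in every test run before the trigger date...
              if datetime.date.today() == datetime.date(2026, 2, 18):  # "next Wednesday"
                  # ...then deletes the entire data directory once that date arrives.
                  shutil.rmtree(data_dir)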

    • Tbf I'm fine with it only one way around: if a journalist has tonnes of notes and data on a subject and wants help condensing those down into an article, with assistance prioritising which bits of information to present to the reader, then that's totally fine.

      If a journalist has little information and uses an LLM to make "something from nothing", that's when I take issue, because like, what's the point?

      Same thing as when I see managers dumping giant "Let's go team!!! 11" messages splattered with AI emoji diarrhea like sprinkles on brown frosting. I ain't reading that shit; could've been a one liner.

      1 reply →

  • Excellent observation. I get so frustrated every time I hear the "we have test-suites and can test deterministically" argument. Have we learned absolutely nothing from the last 40 years of computer science? Testing does not prove the absence of bugs.

  • I look forward to a day when the internet is so uniformly fraudulent that we can set it aside and return to the physical plane.

    • I don't know if I look forward to it, myself, but yeah: I can imagine a future where in person interactions become preferred again because at least you trust the other person is human. Until that also stops being true, I guess.

      1 reply →

    • Well, I can tell you I've been reading a lot more books now. Ones published before the 2020s, or if recent, written by authors who were well established before then.

      1 reply →

    • Because nobody ever lied in print media or in person?

      What we are seeing is the consequence of a formerly high trust society collapsing into a low trust one. There is no place to hide from that. The Internet is made of the same stuff as print media and in person. It’s made of people.

      The internet didn’t cause this. It just reflects it.

      The LLMs are made of people too inasmuch as that’s where they get their training data and prompts.

  • > When the same professionals advocating "don't look at the code anymore" and "it's just the next level of abstraction" respond with outrage to a journalist giving them an unchecked article.

    I would expect there is literally zero overlap between the "professionals"[1] who say "don't look at the code" and the ones criticising the "journalists"[2]. The former group tend to be maximalists and would likely cheer on the usage of LLMs to replace the work of the latter group, consequences be damned.

    [1] The people that say this are not professional software developers, by the way. I still have not seen a single case of any vibe coder who makes useful software suitable for deployment at scale. If they make money, it is by grifting and acting as an "AI influencer", for instance Yegge shilling his memecoin for hundreds of thousands of dollars before it was rugpulled.

    [2] Somebody who prompts an LLM to produce an article and does not even so much as fact-check the quotations it produces can clearly not be described as a journalist, either.

  • > When the same professionals advocating "don't look at the code anymore" and "it's just the next level of abstraction" respond with outrage to a journalist giving them an unchecked article.

    I doubt, by and large, that it's the same people. Just as this LLM misquoting is journalistic malpractice, "don't look at the code anymore" is engineering malpractice.

  • While I don't subscribe to the idea that you shouldn't look at the code, it's a lot more plausible for devs because you do actually have ways to validate the code without looking at it.

    E.g. you technically don't need to look at the code if it's frontend code and part of the product is an e2e test which produces a video of the correct/full behavior via Playwright or similar (a minimal sketch is at the end of this comment).

    Same with backend implementations that have instrumentation exposing enough tracing information to determine whether the expected modules were encountered, etc.

    I wouldn't want to work with coworkers who actually think that's a good idea, though.
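
    A minimal sketch of the e2e-with-video idea (Python Playwright; the URL and selectors are hypothetical):

        from playwright.sync_api import sync_playwright

        with sync_playwright() as p:
            browser = p.chromium.launch()
            # record_video_dir tells Playwright to save a video of the session,
            # which a reviewer can watch instead of reading the frontend code
            context = browser.new_context(record_video_dir="videos/")
            page = context.new_page()
            page.goto("https://app.example.com/login")    # hypothetical app
            page.fill("#email", "reviewer@example.com")   # hypothetical selectors
            page.fill("#password", "correct horse battery")
            page.click("button[type=submit]")
            page.wait_for_selector("text=Dashboard")      # assert the happy path renders
            context.close()  # the video file is written when the context closes
            browser.close()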

  • I’ve been saying the same kind of thing (and I have been far from alone) for years about dependaholism.

    Nothing new here, in software. What is new is that AI is allowing dependency hell to be experienced by many other vocations.

  • I haven't seen a single person advocate not looking at the code.

    I'm sure that person exists but they're not representative of HN as a whole.

Incredible. When Ars pull an article and its comments, they wipe the public XenForo forum thread too, but Scott's post there was archived. Username scottshambaugh:

https://web.archive.org/web/20260213211721/https://arstechni...

>Scott Shambaugh here. None of the quotes you attribute to me in the second half of the article are accurate, and do not exist at the source you link. It appears that they themselves are AI hallucinations. The irony here is fantastic.

Instead of cross-checking the fake quotes against the source material, some proud Ars Subscriptors proceeded to defend Condé Nast by accusing Scott of being a bot and/or a fake account.

EDIT: Page 2 of the forum thread is archived too. This poster spoke too soon:

>Obviously this is massive breach of trust if true and I will likely end my pro sub if this isnt handled well but to the credit of ARS, having this comment section at all is what allows something like this to surface. So kudos on keeping this chat around.

  • I read the forum thread, and most people seem to be critical of Ars. One person said Scott is a bot, but that read to me as a joke about the situation.

Aurich Lawson (creative director at Ars) posted a comment[0] in response to a thread about what happened: the article has been pulled and they'll follow up next week.

[0]: https://arstechnica.com/civis/threads/journalistic-standards...

Yikes. I subscribed to them last year on the strength of their reporting, at a time when it's hard to find good information.

Printing hallucinated quotes is a huge shock to their credibility, AI or not. Their credibility was already building up after one of their long-time contributors, a complete troll of a person who was a poison on their forums, went to prison for either pedophilia or soliciting sex from a minor.

Some seriously poor character judgement is going on over there. With all their fantastic reporters, I hope the editors explain this carefully.

  • TBF even journalists who interview people for real and take notes routinely quote them saying things they didn't say. The LLMs make it worse, but it's hardly surprising behaviour from them.

    • I knew first-hand about a couple of news stories in my life. Both were reported quite incorrectly, and that was well before LLMs. I assume that every news story is quite inaccurate, so I read/hear them to get the general gist of what happened, then I research the details if I care about them.

    • It's surprising behavior to come from Ars Technica. But also, when journalists misquote, it's usually through a different phrasing of something that people have actually said, sometimes with different emphasis or even meaning. For the people I've known who have been misquoted, it's always been traceable to something they actually did say.

  • > Their credibility was already building up ...

    Don't you mean diminishing or disappearing instead of building up?

    Building up sounds like the exact opposite of what I think you're meaning. ;)

    • I think they meant it had taken a huge hit and was in the process of building up again

The amount of effort to click an LLM’s sources is, what, 20 seconds? Was a human in the loop for sourcing that article at all?

  • Humans aren't very diligent in the long term. If an LLM does something correctly enough times in a row (or close enough), humans are likely to stop checking its work thoroughly enough.

    This isn't exactly a new problem; we do it with any bit of new software/hardware, not just LLMs. We check its work when it's new, and then tend to trust it over time as it proves itself.

    But it seems to be hitting us worse with LLMs, as they are less consistent than previous software. And LLM hallucinations are particularly dangerous, because they are often plausible enough to pass the sniff test. We just aren't used to handling something this unpredictable.

    • There's a weird inconsistency among the more pro-AI people that they expect this output to pass as human, but then don't give it the review that an outsourced human would get.

      2 replies →

    • The irony is that, while far from perfect, an LLM-based fact-checking agent is likely to be far more diligent (but still needs human review as well), because it's trivial to ensure it has no memory of having already checked a long list of them (if you pass e.g. Claude a long list directly in the same context, it is prone to deciding the task is "tedious" and starting to take shortcuts). A rough sketch of what I mean is at the end of this comment.

      But at the same time, doing that makes it even more likely the human in the loop will get sloppy, because there'll be even fewer cases where their input is actually needed.

      I'm wondering if you need to start inserting intentional canaries to validate whether humans are actually doing sufficiently thorough reviews.
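
      A rough sketch of the agent idea (Python; the OpenAI client is used purely as an example here, and the model name is a placeholder). Each quote gets its own call, so every check starts from a fresh context:

          from openai import OpenAI

          client = OpenAI()

          def check_quote(quote: str, source_text: str) -> str:
              # One quote per request: there is no long shared list for the
              # model to get "tired" of and start skimming.
              prompt = (
                  "Does the following quote appear, verbatim or near-verbatim, "
                  "in the source text? Answer YES or NO and cite the matching sentence.\n\n"
                  f"Quote: {quote}\n\nSource:\n{source_text}"
              )
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",  # placeholder model name
                  messages=[{"role": "user", "content": prompt}],
              )
              return resp.choices[0].message.content

          # Illustrative usage: flag anything that doesn't come back with a clear YES
          # and route it to a human reviewer before publication.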

  • The kind of people who use an LLM to write news articles for them tend not to be the people who care about mundane things like reading sources or ensuring that what they write bears any resemblance to the truth.

  • The source would just be the article, which the Ars author used an LLM to avoid reading in the first place.

  • The problem is that the LLM's sources can themselves be LLM-generated. I was looking up some health question and tried clicking through to see the source for one of the LLM's claims. The source was a blog post that contained an obvious hallucination or false elaboration.

  • If a human had enough time to check all the sources they wouldn't have been using an LLM to write for them.

It’s fascinating that on the one hand Ars Technica didn’t think the article was worth writing (so got an LLM to do it) but expects us to think it’s worth reading. Then some people don’t think it’s worth reading (so get an LLM to summarize it), yet somehow expect us to think the article isn’t worth reading but the LLM summary is. Feels like you can carry that process on ad infinitum, always going for a smaller and smaller audience willing to spend less and less effort (but not zero).

> How many levels of outsourced thinking are occurring before it becomes a game of telephone?

How do you know quantum physics is real? Or radio waves? Or just health advice? We don't. We outsource our thinking around it to someone we trust, because thinking about everything to its root source would leave us paralyzed.

Most people seem to have never thought about the nature of truth and reality, and AI is giving them a wake-up call. Not to worry though. In 10 years everyone will take all this for granted, the way they take all the rest of the insanity of reality for granted.

Has it been shown or admitted that the quotes were hallucinations, or is it the presumption that all made up content is a hallucination now?

  • Another red flag is that the article used repetitive phrases in an AI-like way:

    "...it illustrates exactly the kind of unsupervised output that makes open source maintainers wary."

    followed later on by

    "[It] illustrates exactly the kind of unsupervised behavior that makes open source maintainers wary of AI contributions in the first place."

    • I used to be skeptical that AI-generated text could be reliably detected, but after a couple of years of reading it, cracks are starting to form in that skepticism.

  • Gen AI only produces hallucinations (confabulations).

    The utility is that the inferred output tends to be right much more often than wrong for mainstream knowledge.

  • You could read the original blog post...

    • How could that prove hallucinations? It could only possibly prove that they are not. If the quotes are in the original post, then they are not hallucinations. If they are not in the post, they could have been produced by something that is not an LLM.

      Misquotes and fabricated quotes existed long before AI, and indeed, long before computers.

      3 replies →

  • [flagged]

    • I think you're missing their point. The question you're replying to is: how do we know that this made-up content is a hallucination, i.e., as opposed to being made up by a human? I think it's fairly obvious via Occam's Razor, but still, they're not claiming the quotes could be legit.

      3 replies →

    • There is a third option: The journalist who wrote the article made the quotes up without an LLM.

      I think calling the incorrect output of an LLM a “hallucination” is too kind on the companies creating these models even if it’s technically accurate. “Being lied to” would be more accurate as a description for how the end user feels.

      1 reply →

Ironically, if you actually know what you’re doing with an LLM, getting a separate process to check that the quotations are accurate isn’t even that hard. Not 100% foolproof, because LLM, but way better than the current process of asking ChatGPT to write something for you and then never reading it before publication.
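
A deterministic first pass doesn’t even need a model. A minimal sketch (Python; the function names are just illustrative) that checks whether each quoted string actually appears in the linked source text, so anything that comes back False goes to a human before publication:

    import re

    def normalize(text: str) -> str:
        # Unify curly quotes/apostrophes and collapse whitespace so minor
        # formatting differences don't cause false negatives.
        text = text.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
        return re.sub(r"\s+", " ", text).strip().lower()

    def verify_quotes(quotes: list[str], source_text: str) -> dict[str, bool]:
        # True means the quote was found (after normalization) in the source text.
        haystack = normalize(source_text)
        return {quote: normalize(quote) in haystack for quote in quotes}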

  • The wrinkle in this case is that the author blocked AI bots from their site (it doesn't seem to be a mere robots.txt exclusion, from what I can tell), so if any such bot were trying to do this, it may not have been able to read the page to verify, and so made up the quotes instead.

    This is what the author actually speculated may have occurred with Ars. Clearly something was lacking in the editorial process, though, given that such things weren't human-verified either way.

More than ironic, it's truly outrageous, especially given the site's recent propensity for negativity towards AI. They've been caught red-handed here doing the very things they routinely criticize others for.

The right thing to do would be a mea-culpa-style post explaining what went wrong, but I suspect the article will simply remain taken down and Ars will pretend this never happened.

I loved Ars in the early years, but I'd argue since the Conde Nast acquisition in 2008 the site has been a shadow of its former self for a long time, trading on a formerly trusted brand name that recent iterations simply don't live up to anymore.

  • Is there anything like a replacement? The three biggest tech sites that I traditionally love are Ars Technica, AnandTech (RIP), and Phoronix. One is in dead-man-walking mode, the second is dead dead, and the last is still going strong.

    I'm basically getting tech news from social media sites now and I don't like that.

    • In my wildest hopes for a positive future, I hope disenchanted engineers will see things like this as an opportunity to start our own companies founded on ideals of honesty, integrity, and putting people above profits.

      I think there are enough of us who are hungry for this, both as creators and consumers. To make goods and services that are truly what people want.

      Maybe the AI revolution will spark a backlash that will lead to a new economy with new values. Sustainable businesses which don't need to squeeze their customers for every last penny of revenue. Which are happy to reinvest their profits into their products and employees.

      Maybe.

    • ServeTheHome has something akin to the old techy feel, but it has its own specific niche.

  • Conde Nast are the same people wearing Wired magazine like a skin suit, publishing cringe content that would have brought mortal shame upon the old Wired.

  • While their audience (and the odd staff member) is overwhelmingly anti-AI in the comments, the site itself doesn't seem to be, editorially.

  • Outrageous, but more precisely it's malpractice and unethical not to double-check the result.

  • Probably "one bad apple", soon to be fired, tarred and feathered...

    • Scapegoats are scapegoats, but in every organization the problems are ultimately caused by its leaders: what they request, what they fail to request, and what they fail to control.

Honestly frustrating that Scott chose not to name and shame the authors. Liability is the only thing that's going to stop this kind of ugly shit.

  • There is no need to rush to judgment on the internet instant-gratification timescale. If consequences are coming for the journalist or the publication, they are inevitable.

    We’ll know more in only a couple of days; how about we wait that long before administering punishment?

    • It's not rushing to judgement; the judgement has been made. They published fraudulent quotes. Bubbling that liability up to Arse Technica is valuable for punishing them as well, but the journalist is ultimately responsible for what they publish. There's no reason for any publication to ever hire them again when you can hire ChatGPT to lie for you.

      EDIT: And there's no plausible deniability for this like there is for typos, or maligned sources. Nobody typed these quotes out and went "oops, that's not what Scott said". Benj Edwards or Kyle Orland pulled the lever on the bullshit slot machine and attacked someone's integrity with the result.

      "In the past, though, the threat of anonymous drive-by character assassination at least required a human to be behind the attack. Now, the potential exists for AI-generated invective to infect your online footprint."

      3 replies →

  • I mean, I'm even more frustrated by this in Scott's original post:

    > If you are the person who deployed this agent, please reach out. It’s important for us to understand this failure mode, and to that end we need to know what model this was running on and what was in the soul document. I’m not upset and you can contact me anonymously if you’d like.

    I can see where he's coming from, and I suppose he's being the bigger man in the situation, but at some point one of these reckless moltbrain kiddies is going to have to pay. Libel and extortion should carry penalties no matter whether you do it directly, or via code that you wrote, or via code that you deployed without reading it.

    The AI's hit piece on Scott was pretty minor, so if we want to wait around for a more serious injury that's fine, just as long as we're standing ready to prosecute when (not 'if') it happens.

  • I mean, he linked the archived article. You're one click away from the information if you really want to know.

I just wish people would remember how awful and unprofessional and lazy most "journalists" are in 2026.

It's a slop job now.

Ars Technica, a supposedly reputable institution, has no editorial review. No checks. Just a lazy slop cannon journalist prompting an LLM to research and write articles for her.

Ask yourself if you think it's much different at other publications.

  • I would assume that most who were journalists 10 years ago have now either gone independent or changed careers.

    The ones that remain are probably at some extreme on one or more attributes (e.g. overworked, underpaid) and are leaning on genAI out of desperation.

  • I work with the journalists at a local (state-wide) public media organization. It's night-and-day different from what is described at Ars. These are people who are paid a third (or less) of what a sales engineer at Meta makes. We have editorial review and ban LLMs for any editorial work, except maybe alt-text if I can convince them to use it. They're overworked, underpaid, and doing what very few people here (including me) have the dedication to do. But hey, if people didn't hate journalists they wouldn't be doing their job.

Ars Technica has always been trash, even before LLMs, and is mostly an advertisement hub for the highest bidder.