Comment by andrewflnr

9 days ago

People put a lot of weight on blame-free post-mortems and not punishing people who make "mistakes", but I believe that has to stop at the level of malice. Falsifying quotes is malice. Fire the malicious party or everything else you say is worthless.

They don't actually say it's a blame-free post-mortem, nor is it worded as such. They do say it's their policy not to publish anything AI-generated unless it's specifically labelled. So the assumption would be that someone didn't follow policy and there will be repercussions.

The problem is that people on the Internet, HN included, always howl for maximalist repercussions, i.e. that someone should be fired. I don't see that as a healthy or proportionate response; I hope they just reinforce the policy and everyone keeps their jobs and learns a little.

  • Most of the time a firing is not a reasonable or helpful response to a mistake.

    This was not a mistake.

  • > They don't actually say it's a blame-free post-mortem, nor is it worded as such.

    Correct, I only mentioned the blame-free post-mortem thing to head off the usual excuses, as a shorthand for the general approach. It has merits in many/most circumstances.

    > I don't see that as a healthy or proportionate response,

    Again, correct. It's only appropriate in cases of malice.

    • Hanlon's razor is a farce. There are no unintentional acts: the drunk driver takes off because he thinks he has to get back as fast as possible, and the sick man invokes AI to write his article because he must hit the deadline.


Yes. This is being treated as though it were a mistake, and oh, humans make mistakes! But it was no mistake. Possibly whoever was responsible for reviewing the article before publication made a mistake in not catching it. But plagiarism and fabrication require malicious intent, and the authors responsible engaged in both.

  • > Possibly whoever was responsible for reviewing the article before publication made a mistake in not catching it

    My wife, a former journalist, said that you don't directly quote anyone without talking to them first and verifying that what you're quoting is for sure from them. Then she said, "I guess they have no editors?" because in her experience editors aren't fact checkers, but they're supposed to have the experience and wisdom to ask questions about the content to make sure everything is kosher before going to print. Seems like multiple errors in judgment from multiple parts of the organization.

    (My wife left journalism about 15 years ago, so maybe things are different, but that was her initial reaction.)

    • In this case, the article was quoting a blog post, so presumably the editor (it _does_ look like there was one) took the arguably-not-unreasonable stance that _obviously_ the author wouldn't have fabricated quotes from a blog post they're literally linking to, that would be _insane_, nobody would do that. And thus that they didn't need to check.

      And that might be a semi-justifiable stance if dealing with a human.

      One of the many problems with our good friends the magic robots is that they don't just do incorrect stuff, they do _weird_ incorrect stuff, that a human would be unlikely to do, so it can fly under the radar.

    • > My wife left journalism about 15 years ago so maybe things are different

      Ya, they are quite different!

Blameless post-mortems work really well when you use them to fix process issues. In this case, you'd identify issues like "not all quotes are fact checked because our submissions to editorial staff don't require sources and the checklist doesn't require fact checks", "the journalist worked while sick because we were understaffed", "nothing should ever be copy-pasted from an LLM", etc.

There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention.

  • Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.

    • It's intentionally reckless, not intentionally harmful or an intentional falsification of quotes. I am sure they would have preferred it if it hadn't falsified any quotes.


  • I think that is the crucial question. Often we lump together malice with "reckless disregard". The intention to cause harm is very close to the intention to do something that you know or should know is likely to cause harm, and we often treat them the same because there is no real way to prove intent, so otherwise everyone could just say they "meant no harm" and just didn't realize how harmful their actions could be.

    I think that a journalist using an AI tool to write an article treads perilously close to that kind of recklessness. It is like a carpenter building a staircase with some kind of weak glue.

  • > Using a flawed tool doesn’t count as intention.

    "Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."

    They aren't allowed to use the tool, so there was clearly intention.

  • Replace parent-poster's "malice" with "malfeasance", and it works well-enough.

    I may not intend to burn someone's house down by doing horribly reckless things with fireworks... but after it happens, surely I would still bear both some fault and some responsibility.

  • Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity.

    • I don't think the article was written by an LLM; it doesn't read like it, it reads like it was written by actual people.

      My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him, and instead hallucinated them.

      They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article.

      I also think this "apology" article sucks, I want to know specifically what happened and what they are doing to fix it.

  • The issues with such tools are highly documented though. If you’re going to use a tool with known issues you’d better do your best to cover for them.

  • The tool when working as intended makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent.

  • They're expected by policy to not use AI. Lying about using AI is also malice.

    • We see a typical issue in modern online media: the policy is not to use AI, but the demands for content produced per day make it very difficult not to use AI... so the end result is undisclosed AI. This is happening all over the old blogosphere publications, regardless of who owns them. The ad revenue per article is just not great.

I'm curious if you've read the author's Bluesky statement (which wasn't available when you made your comment) and what you think of it?

  • I'll admit that at least looks consistent with extreme carelessness rather than lying. I don't find it terribly convincing, though. I find it a suspiciously long chain of excuses perfectly calibrated to excuse the events. The description gets vague right at the critical point where AI output gets laundered into journalistic output, and the part about the tool being strictly to gather "verbatim source material" sounds like the narrow end of a wedge of excuses for something that actually doesn't do that. But I don't have the background to tell with confidence whether he's lying. If it turns out he's not, well, I'd feel a little bad, but I still wouldn't respect him.

    I certainly stand by my broader claim that lying is fireable.

    • Well I appreciate you taking the time to respond and acknowledge the new evidence. I agree with the broad point that dishonesty can't be tolerated in a newsroom. And it's a "Caesar's wife must be beyond reproach" situation, the appearance of dishonesty is very bad regardless of the reality. And despite what Orland claims I do think there's blame to go around for not catching the mistake (assuming we accept his account).

      For what it's worth, the post below talks about experimenting with Claude Code but also having COVID in December. I don't know what to think of that, I did work with a guy who just kept catching COVID (or at least he said that and I believed him, I didn't swab him personally or anything), but it is weird for him to have COVID in December and February.

      https://arstechnica.com/information-technology/2026/01/10-th...


At this point, anyone reporting on tech should know the problems with AI. As such, even if AI is used for research and the articles are written from that output by a human, there is still an absolute, unquestionable expectation to do the standard manual verification of facts. Not doing it is pure malice.

I don’t see how you could know that without more information. Using an AI tool doesn’t imply that they thought it would make up quotes. It might just be careless.

Assuming malice without investigating is itself careless.

  • > Using an AI tool doesn’t imply that they thought it would make up quotes

    He covers AI for Ars Technica. Like, if he doesn't know that chatbots make shit up...

    FWIW I suspect that a lot of the problem here was that he was _working while he had a high fever_. This is a really bad idea.

  • we are fucking doomed holy shit

    we're really at the point where people are just writing off a journalist passing off their job to a chatgpt prompt as though that's a normal and defensible thing to be doing

    • No one said it was defensible. They drew a distinction between incompetence and malice. Let's not misquote each other here in the comments.
