Comment by blell

11 days ago

There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention.

Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.

  • It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred if it hadn't falsified any quotes.

    • He's on the AI beat; if he is unaware that a chatbot will fabricate quotes and didn't verify them, that is a level of reckless incompetence that warrants firing.


    • “In any statutory definition of a crime ‘malice’ must be taken not in the old vague sense of ‘wickedness’ in general, but as requiring either (i) an actual intention to do the particular kind of harm that was in fact done, or (ii) recklessness as to whether such harm should occur or not (ie the accused has foreseen that the particular kind of harm might be done, and yet has gone on to take the risk of it).” R v Cunningham

I think that is the crucial question. Often we lump together malice with "reckless disregard". The intention to cause harm is very close to the intention to do something that you know or should know is likely to cause harm, and we often treat them the same because there is no real way to prove intent, so otherwise everyone could just say they "meant no harm" and just didn't realize how harmful their actions could be.

I think that a journalist using an AI tool to write an article treads perilously close to that kind of recklessness. It is like a carpenter building a staircase using some kind of weak glue.

> Using a flawed tool doesn’t count as intention.

"Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."

They aren't allowed to use the tool, so there was clearly intention.

Replace the parent poster's "malice" with "malfeasance", and it works well enough.

I may not intend to burn someone's house down by doing horribly reckless things with fireworks... but after it happens, surely I would still bear both some fault and some responsibility.

Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity.

  • I don't think the article was written by an LLM; it doesn't read like it, it reads like it was written by actual people.

    My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him, and instead hallucinated them.

    They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article.

    I also think this "apology" article sucks, I want to know specifically what happened and what they are doing to fix it.

The issues with such tools are highly documented though. If you’re going to use a tool with known issues you’d better do your best to cover for them.

The tool when working as intended makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent.

They're expected by policy to not use AI. Lying about using AI is also malice.

  • We see a typical issue in modern online media: the policy is not to use AI, but the demands of content production per day make it very difficult to avoid AI... so the end result is undisclosed AI use. This is happening all over the old blogosphere publications, regardless of who owns them. The ad revenue per article is just not great.