Comment by DiskoHexyl
4 hours ago
It's an open secret that even the larger news outlets mandate LLM use. They buy subscriptions and have guidelines on how to mask the output (so that it reads less AI-generated), how to fact-check the links and quotes, etc. Authors who aren't willing to jump on this particular train are quickly let go for "performance" reasons.
The expectation is to produce more with much less (staff), the pipeline is heavily optimized for clicks, and every single headline is A/B tested. Ars isn't alone in churning out poorly reviewed clickbait (and then not owning its mistakes).
Is there any evidence that Ars Technica management induced this journalist to use AI, or are you just claiming it's an "open secret" without knowing anything about this specific incident? Because without any kind of detail it sounds like the latter, maybe motivated by a reflex to blame management whenever workers blunder. Unless there's evidence that actually points at Ars Technica management, dismissing the journalist's professional responsibility on the basis of vague rumors doesn't seem appropriate.
I didn't state that Ars Technica specifically mandates LLM use for its authors. What I did state is that their editorial standards are lacking and they tend to produce a lot of clickbait.
IMO the industry is in crisis
> It's an open secret that even the larger news outlets mandate LLM use.
Given the context, if you didn't intend for this to imply that Ars mandates LLM use, you should probably rewrite it.
How is it an open secret? Is there evidence for this? I still see typos in some articles, so it feels like humans are in the loop. Maybe that's their AI masking?
I would love to hear more about this "open secret", especially the guidance on how to "mask the output", because as someone who works in news/media, it's news to me.