
Comment by foxfired

12 days ago

One thing I've learned is that AI-written text is not hard to spot. Usually, when I encounter slop, I close the page one or two paragraphs in. Tools like this will become more common, but they mostly serve to win an argument or to confirm what you already believe.

Also, it was painful to learn that the very first blog post I wrote, back in 2013, is AI generated. But I'm fine with it, because I read this:

> A short punchy opener (≤10 words) followed by two or more substantially longer elaboration sentences — the LLM "hook then evidence pile" rhythm.

... and realized that the entire app is AI generated.

If you can spot it, an AI can spot it too. We have a website with some AI-generated content (about AI). I added a skill to correct AI slop, and the content got a lot better once it was in place. I had codex research slop patterns, and it came up with a list of known AI-slop linguistic anti-patterns; it now fixes its own content using that list. I also put a guardrail in place that does a critical review of all produced content as a final quality gate, which catches a lot of baseless claims and other slop. And another skill ensures we use the right SEO-relevant language (from a list produced by a separate agent).
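The "list of anti-patterns plus a final review pass" idea above can be sketched in a few lines. This is purely illustrative: the pattern names and regexes below are my own examples of well-known slop tells, not the actual list codex produced, and `flag_slop` is a hypothetical helper standing in for the review skill.

```python
import re

# Illustrative anti-pattern list (assumed examples, not the author's real list).
# Each entry maps a human-readable name to a regex that flags the pattern.
SLOP_PATTERNS = {
    "overused 'delve'": re.compile(r"\bdelve\b", re.IGNORECASE),
    "'not just X, it's Y' construction": re.compile(
        r"not just [^.,;]{1,40}, (it's|it is)", re.IGNORECASE
    ),
    "'in today's ... world' opener": re.compile(
        r"in today's [^.]{0,40} world", re.IGNORECASE
    ),
    "'game-changer' cliché": re.compile(r"game.changer", re.IGNORECASE),
    "'rich tapestry' cliché": re.compile(r"\btapestry\b", re.IGNORECASE),
}

def flag_slop(text: str) -> list[str]:
    """Return the names of every anti-pattern found in the text.

    A real quality gate would run something like this over each draft
    and either rewrite the flagged spans or reject the content.
    """
    return [name for name, pattern in SLOP_PATTERNS.items()
            if pattern.search(text)]
```

In practice the interesting part is the loop around this check: the agent regenerates or edits the flagged passages and re-runs the gate until the draft comes back clean.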

It's actually starting to generate interesting content from a few bullets and ideas I give it. I won't claim it's perfect, but it does a decent enough job.

I have my reasons for doing this (we help people set up agentic workflows), and I appreciate that not everybody likes the idea of AI-generated content. But I think AI slop will get harder and harder to spot: slop is basically what you get without guardrails and quality gates. Of course, most people still lack the skills to configure their AI tools properly, particularly non-technical people. But it's not that hard, and I bet there are a few handy journalists out there getting better at this. For technical writers, this is not going to be optional.