Comment by jillesvangurp

11 days ago

If you can spot it, an AI can spot it too. We have a website with some AI-generated content (about AI). I added a skill to correct AI slop, and the content got a lot better once it was in place. I actually had Codex research slop patterns, and it came up with a list of known AI-slop linguistic anti-patterns; it now fixes its own content using that list. I also put a guard rail in place that does a critical review of all produced content as a final quality gate. That catches a lot of baseless claims and other slop. And there's another skill that ensures we use the right SEO-relevant language (a list produced by a separate agent).
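The slop-correction pass could be sketched roughly like this: a linter that scans a draft for known slop phrasings and reports the hits for rewriting. The pattern list and function names here are illustrative assumptions on my part, not the actual list the skill uses.

```python
import re

# Illustrative anti-pattern list -- a real one would be much longer
# and researched, as described above.
SLOP_PATTERNS = [
    r"\bdelve into\b",
    r"\bin today's fast-paced world\b",
    r"\bgame[- ]changer\b",
    r"\bunlock the (full )?potential\b",
    r"\bit'?s important to note\b",
]

def find_slop(text: str) -> list[tuple[str, str]]:
    """Return (pattern, matched text) pairs for each anti-pattern found."""
    hits = []
    for pattern in SLOP_PATTERNS:
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((pattern, m.group(0)))
    return hits

draft = "In today's fast-paced world, AI is a game-changer. Let's delve into it."
for pattern, match in find_slop(draft):
    print(f"slop: {match!r} (matched {pattern})")
```

In practice you'd feed the flagged spans back to the model with an instruction to rewrite them, rather than just printing them.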

It's actually starting to generate interesting content based on me giving it a few bullets and ideas. I won't claim it's perfect, but it does a decent enough job.

I have my reasons for doing this (we help people set up agentic workflows), and I appreciate that not everybody likes the idea of AI-generated content. But I think AI slop will get harder and harder to spot. Basically, slop is what you get without guard rails and quality gates. Of course, most people still lack the skills to configure their AI tools properly, particularly non-technical people. But it's not that hard, and I bet there are a few handy journalists out there getting better at this. Also, for technical writers this is not going to be optional.