Comment by therobots927
5 hours ago
“Over 1.5 million of those reports involved generative AI. Some of this material depicts entirely fictional children. But a growing share is generated using the likenesses of real, identifiable children — children who have never suffered contact abuse, but who are now victims nonetheless. And all of it — real or synthetic — floods into the same investigation pipeline, where human analysts must treat every image as potentially depicting a real child in danger.”
If any of the leading AI companies are looking to get back in the good graces of the public, they should seriously consider releasing an open-source model that reliably labels media (text, photo, or video) with a probability that it is AI-generated.
There is a 0% chance they don't already have models like this internally, since they need them to keep AI-generated content out of their own training data. So release them.
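To be concrete about what I'm asking for: something like the interface below. This is a minimal sketch, not any company's actual tooling; score_ai_probability is a hypothetical stand-in for the released model's inference call, shown being used for the same training-data filtering step the labs presumably already do internally.

    from typing import Iterable

    def score_ai_probability(media_bytes: bytes, media_type: str) -> float:
        """Hypothetical call into the proposed open-source detector.

        Returns P(media is AI-generated) in [0, 1] for text, photo, or video.
        """
        raise NotImplementedError("stand-in for a released detector model")

    def filter_training_corpus(items: Iterable[tuple[bytes, str]],
                               threshold: float = 0.5) -> list[tuple[bytes, str]]:
        """Keep only items the detector scores as likely human-made --
        the filtering step labs would need for their own training data."""
        return [(data, kind) for data, kind in items
                if score_ai_probability(data, kind) < threshold]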
That's a nice thought, but the unfortunate technical reality is that AI-content detection tools have never worked reliably and probably never will.
https://deepmind.google/models/synthid/