Comment by omgmo

3 days ago

What about spoofing a SynthID false positive onto a real image or video? Who gets to arbitrate what is true?

I think that AI service providers should have safeguards and encoded attribution. But that only helps when people lazily share things with friends or on social media; it won't stop motivated bad actors.

The only way to actually enforce this, I think, would be to ban all local models and have the service providers store perceptual hashes of all generated images and video. It feels like the cat's out of the bag already, though (for images at least).
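For what it's worth, the perceptual-hash idea is cheap to implement on the provider side. A minimal sketch of a difference hash (dHash), one common perceptual hashing scheme (this assumes the image is already downscaled to a 9x8 grayscale grid; real pipelines use something like Pillow/imagehash for that step):

```python
def dhash(pixels):
    """pixels: 8 rows of 9 grayscale values -> 64-bit integer hash.
    Each bit records whether a pixel is brighter than its right neighbor,
    so the hash survives recompression, resizing, and mild edits."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance means 'probably the same image'."""
    return bin(a ^ b).count("1")

# Toy 9x8 "images": a horizontal gradient, the same gradient with slight
# noise (near-duplicate), and the mirrored gradient (clearly different).
img = [[c * 10 for c in range(9)] for _ in range(8)]
noisy = [[v + (1 if (r + c) % 3 == 0 else 0) for c, v in enumerate(row)]
         for r, row in enumerate(img)]
mirrored = [list(reversed(row)) for row in img]

# hamming(dhash(img), dhash(noisy)) stays small; vs. mirrored it is large.
```

The lookup side is then just a nearest-neighbor search over stored hashes with a Hamming-distance threshold, which is why the storage burden falls entirely on the providers and why local models escape the scheme completely.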