
Comment by throwaway13337

3 days ago

These sorts of tools will only be able to positively identify a subset of genAI content. But I suspect that people will use it to 'prove' something is not genAI.

In a sense, the identifier company can be an arbiter of the truth. Powerful.

Training people on a half-solution like this might do more harm than good.

It will just become an arms race if we try to prove "not genAI": detectors will improve, and genAI will improve without marking (open-source and state actors will have unmarked genAI even if we mandate marking).

Marking real content from the lens onward, through its whole digital life, is more practical. But then what do we do with all the existing hardware that doesn't mark real media, and with media that preexisted this problem?

  • I agree. A mechanism to voluntarily attach certificate metadata to the media record from the device seems like a better idea. That can still be spoofed, though.

    In the end, society has always existed on human chains of trust. Community. As long as there are human societies, we need human reputation.
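    The device-attestation idea above can be sketched minimally. This is a hypothetical illustration, not any real scheme (real systems such as C2PA use asymmetric certificates and hardware key stores); an HMAC with an invented shared key stands in for the device signature to keep the example self-contained, and all names here are made up.

    ```python
    import hashlib
    import hmac
    import json

    # Hypothetical device key; in a real scheme this would be a private key
    # in secure hardware, with a certificate chain back to the manufacturer.
    DEVICE_KEY = b"secret-device-key"

    def attach_record(media_bytes: bytes, metadata: dict) -> dict:
        """Bind a hash of the media plus capture metadata to a signature."""
        digest = hashlib.sha256(media_bytes).hexdigest()
        payload = json.dumps({"sha256": digest, **metadata}, sort_keys=True)
        sig = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return {"payload": payload, "signature": sig}

    def verify_record(record: dict) -> bool:
        """Check that payload and signature still match."""
        expected = hmac.new(DEVICE_KEY, record["payload"].encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["signature"])

    record = attach_record(b"raw image bytes", {"device": "cam-01"})
    print(verify_record(record))   # True: untampered record

    record["payload"] = record["payload"].replace("cam-01", "cam-02")
    print(verify_record(record))   # False: edited metadata no longer verifies
    ```

    Note the spoofing point stands: anyone who extracts the device key (or photographs a screen with a trusted device) can produce records that verify, which is why the thread falls back on human webs of trust.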

You could take a picture or video with your phone of a screen or projection of altered media and thereby capture a watermarked "verified" image or video.

None of these schemes for validation of digital media will work. You need a web of trust, repeated trustworthy behavior by an actor demonstrating fidelity.

You need people and institutions you can trust, who have the capability of slogging through the ever more turbulent and murky sea of slop and using correlating evidence and scientific skepticism and all the cognitive tools available to get at reality. Such people and institutions exist. You can also successfully proxy validation of sources by identifying people or groups good at identifying primary sources.

When people and institutions defect, as many legacy media, platforms, talking heads, and others have, you need to ruthlessly cut them out of your information feed. When or if they correct their mistake, just follow tit for tat, and perhaps they can eventually earn back their place in the de-facto web of trust.
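The tit-for-tat policy described above can be stated as a tiny rule. A hypothetical sketch, with invented labels for honest and dishonest behavior:

```python
# Tit for tat applied to an information feed: trust a source iff its most
# recent action was honest. A defection (publishing falsehood) cuts it out;
# a correction earns its place back. "honest"/"defect" labels are invented.

def trust(history: list[str]) -> bool:
    """history: a source's actions, oldest first."""
    if not history:
        return True  # tit for tat opens by cooperating
    return history[-1] == "honest"

print(trust(["honest", "honest", "defect"]))   # False: ruthlessly cut out
print(trust(["honest", "defect", "honest"]))   # True: earned its place back
```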

Google's stamp of approval means less than nothing to me; it's a countersignal, indicating I need to put in even more effort than otherwise to confirm the truthfulness of any claims accompanied by their watermark.

It is actively harmful to society. Slap SynthID on some of the photographic evidence from the unreleased Epstein files and you instantly de-legitimize it. Launder a SynthID image through a watermark-free model and it's legit again. The fact that it exists at all can't be interpreted in any way other than as malice.