Comment by alwa
4 hours ago
I agree that it feels like the tiny window of opportunity hasn't quite shut yet, and it's a problem space I know I should take more interest in. What do you see as the viable technical directions? Something along the lines of what Altman was trying to do with his Orb? Something along the lines of the C2PA's Content Credentials?
[0] e.g. https://www.businessinsider.com/sam-altman-tools-for-humanit... and the feature piece at https://time.com/7288387/sam-altman-orb-tools-for-humanity/
Instead of collecting biometric info from humans and IDing all of their online movements, you could mandate that LLM output be watermarked (using a technology that Scott Aaronson was hired by OpenAI to help develop; the project was shut down under Altman right after it was shown to work), so that it's the machines' online output that gets identified instead. It's easy to read this story as implying the watermarking project was shut down precisely to preserve the rationale for the Orb: telling humans they had to be tagged in order to distinguish them from machines, when the machines could have been tagged more easily.
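For a sense of how statistical watermarking works, here is a minimal toy sketch. Note the hedging: Aaronson's actual scheme reportedly uses a different construction (exponential-minimum sampling keyed by a cryptographic function), so this sketch instead illustrates the simpler "green list" biasing idea from the Kirchenbauer et al. line of work. The vocabulary, fraction, and threshold here are all made-up illustration values, not anyone's real parameters.

```python
import hashlib
import math
import random

VOCAB = list(range(1000))   # toy vocabulary of token ids (illustrative only)
GREEN_FRACTION = 0.5        # fraction of the vocab marked "green" at each step

def green_list(prev_token: int) -> set[int]:
    # Deterministically derive a "green" subset of the vocab from the
    # previous token, using a hash as the shared secret-free toy key.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def watermarked_sample(prev_token: int, rng: random.Random) -> int:
    # A real model would softly bias its logits toward green tokens;
    # this toy version simply always picks a green token.
    return rng.choice(sorted(green_list(prev_token)))

def detect(tokens: list[int]) -> float:
    # z-score of the observed green-token count against the null
    # hypothesis that tokens were generated with no watermark.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(var)

rng = random.Random(0)
marked = [rng.choice(VOCAB)]
for _ in range(200):
    marked.append(watermarked_sample(marked[-1], rng))
unmarked = [rng.choice(VOCAB) for _ in range(201)]

print(detect(marked) > 4.0)         # watermarked text scores far above chance
print(abs(detect(unmarked)) < 4.0)  # ordinary random text stays near zero
```

The key property is that detection needs only the (keyed) green-list function, not the model itself; and because the signal is statistical, it survives a moderate amount of editing, though paraphrasing by another model can wash it out.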
Is that viable given the proliferation of open-weight LLMs that don't apply that sort of watermarking? If somebody with malign intent can skip the attestation, presumably they will, right?
If every major LLM producer applied it, the watermark would not be easy for malicious actors to strip. And on top of that, most of the problem comes from careless actors rather than determined ones.