Comment by lrvick
5 hours ago
If there are ad incentives, assume all content is fake by default.
On the actual open, decentralized internet, which still exists (Mastodon, IRC, Matrix...), bots are rare.
That’s not because it’s decentralized or open; it’s because it doesn’t matter. If it were larger or more important, it would be overrun by bots within weeks.
Any platform that wants to resist bots needs to:
- tie personas to real or expensive identities
- force people to add an AI flag to AI content
- let readers filter for content not marked as AI
- be absolutely ruthless in permabanning anyone who posts AI content unmarked: one strike and you are dead forever (rough sketch in code below)
The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea how to handle it.
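To make that "flag it, filter it, one strike and you're out" idea concrete, here's a rough sketch in Python. Everything in it is hypothetical (the Post/Platform names, the detected_as_ai input); it's just the policy written as code, not any real platform's API:

```python
# Hypothetical sketch of the "AI flag + reader filter + one-strike permaban" policy.
# None of these names correspond to a real platform or library.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str                 # tied to a real or expensive identity
    body: str
    marked_as_ai: bool = False  # the author-supplied AI flag

@dataclass
class Platform:
    banned: set[str] = field(default_factory=set)
    posts: list[Post] = field(default_factory=list)

    def submit(self, post: Post, detected_as_ai: bool) -> bool:
        """Accept a post, or permaban the author for unmarked AI content."""
        if post.author in self.banned:
            return False
        if detected_as_ai and not post.marked_as_ai:
            self.banned.add(post.author)  # one strike, dead forever
            return False
        self.posts.append(post)
        return True

    def feed(self, hide_ai: bool = True) -> list[Post]:
        """Readers can filter out anything marked as AI."""
        return [p for p in self.posts if not (hide_ai and p.marked_as_ai)]
```

The hard part is obviously where detected_as_ai comes from: whoever controls that signal controls the banhammer, which is exactly the weaponization problem above.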
It's never going to happen, but I felt we solved all of this with forums and IRC back in the day. I wish we gravitated towards that kind of internet again.
Group sizes were smaller and as such easier to moderate. There could be plenty of similar-interest forums, which meant even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections on the same forum) were good at filtering out low-value posters.
There were bots, but they were not as big a problem: the message amplification was smaller, and ban evasion was probably harder.
> I wish we gravitated towards that kind of internet again.
So do it. Forums haven't gone away, you just stopped going to them. Search for your special interest followed by "Powered by phpBB" (or Invision Community, or your preferred software) and you'll find plenty of surprisingly active communities out there.
You're describing Discord today
> and be absolutely ruthless in permabanning anyone who posts AI content unmarked,
It would certainly be fun to trick people I dislike into posting AI content unknowingly. Maybe it has to be so low-key that they aren't even banned on the first try, but that just seems ripe for abuse.
I want a solution to this problem too, but I don't think this is reasonable or practical. I do wonder what it would mean if, philosophically, there were a way to differentiate between "free speech" and commercial speech such that one could be respected and the other regulated. But if there is such a distinction I've never been able to figure it out well enough to make the argument.
Of course - because everyone is banned upon first suspicion.
Usenet died partly due to the ads, and the inability of ad-blocking software at the time to keep up.
People left and never came back.
But those bots were certainly around in the 90s
Worst of all, the bots, spam, and ads are still there, even if there is no one left to read them. Usenet might still be alive (for piracy/binaries at least), plus maybe a handful of still-active text groups, but the text groups I used to read have been nothing but a constant flow of spam for 15+ years.