Comment by WillPostForFood
18 hours ago
*You need an authoritative source to say "This person is real"*
Does that even accomplish much? It may cut down on mass fake account creation. But real people can then create an authenticated account and use an LLM to post as an authenticated real person.
Yeah, that's a problem, you're right. There are some ways to mitigate it, but they introduce their own issues. Say you give someone only one ID for their lifetime; they start to spam AI crap, so you ban their ID. Sounds okay, except who is available to police all 8 billion IDs and determine whether they're spamming? Who polices the police? What if these IDs become critical for conducting commerce and banning someone is massively detrimental to their finances? Etc. These problems aren't necessarily unsolvable, but they are super difficult.
If there's only one verifier, or just a handful, then a human can at most burn through a few credentials before they run out. The risk is of course someone getting hold of someone else's credential, but that isn't as big an issue, especially for smaller online communities.
You underestimate the human population in certain countries, literally.
I just don't see a world where a small community ends up having to deal with a dedicated set of potentially spoofed identities. There are already tools like slow-downs and post limits for new members that can protect against this. HN is the biggest community I'm in by an order of magnitude, and it's the only community I know of that can't just use a slow-mode type mechanic to halt this kind of attack.
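The slow-mode mechanic described above can be sketched in a few lines. This is a minimal illustration, not any real forum's implementation; all names and thresholds (300-second cooldown for new members, trust after 50 posts) are made up for the example:

```python
class SlowMode:
    """Minimal sketch of a per-member posting cooldown.

    New members get a long cooldown between posts; members who have
    cleared a post-count threshold get a short one. Thresholds and
    names are illustrative, not taken from any real forum software.
    """

    def __init__(self, new_member_cooldown=300, trusted_cooldown=10,
                 trust_after_posts=50):
        self.new_member_cooldown = new_member_cooldown
        self.trusted_cooldown = trusted_cooldown
        self.trust_after_posts = trust_after_posts
        self.last_post = {}    # member id -> timestamp of last accepted post
        self.post_count = {}   # member id -> number of accepted posts

    def try_post(self, member_id, now):
        count = self.post_count.get(member_id, 0)
        cooldown = (self.trusted_cooldown if count >= self.trust_after_posts
                    else self.new_member_cooldown)
        last = self.last_post.get(member_id)
        if last is not None and now - last < cooldown:
            return False  # still in cooldown: reject the post
        self.last_post[member_id] = now
        self.post_count[member_id] = count + 1
        return True
```

The point of the scheme is that even a flood of freshly spoofed identities can each only post once per long cooldown window, which caps the damage a burst of new accounts can do to a small community.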
> But, real people can then create authenticated account, and use an LLM to post as an authenticated real person.
They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web
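One way to enforce "no infinite accounts per authenticated person" is for the service to keep a salted fingerprint of each verified identity and refuse duplicates. The sketch below is hypothetical (the class, the salting scheme, and the idea of storing only a hash rather than the raw ID are all assumptions for illustration):

```python
import hashlib

class IdentityRegistry:
    """Sketch: allow at most one account per verified identity.

    The service stores only a salted hash of the verifier-issued ID,
    so it can detect duplicate signups without retaining the raw
    identifier. Everything here is illustrative.
    """

    def __init__(self, salt=b"service-specific-salt"):
        self.salt = salt
        self.claimed = {}  # identity fingerprint -> account name

    def _fingerprint(self, verified_id):
        # Salted hash so the same ID can't be correlated across services
        return hashlib.sha256(self.salt + verified_id.encode()).hexdigest()

    def register(self, verified_id, account):
        fp = self._fingerprint(verified_id)
        if fp in self.claimed:
            return None  # this identity already claimed an account here
        self.claimed[fp] = account
        return account
```

A person can still run an LLM behind their one account, as the parent comment notes, but they can't mint thousands of authenticated accounts, which is the reduction being argued for.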
There is actually a different problem with this: Suppose there is a major vulnerability in some popular device. 50 million people get compromised; the attacker can now impersonate any of them at will. They go around and create 50 million accounts on various services, or take over the user's existing account on that service.
What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.
So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?
Yeah, that's a big problem. Pretty sure you can see it in real life, where lots of old dead accounts with weak passwords on Facebook or Twitter eventually get hacked. It must be pretty weird to see your dead grandpa suddenly start trying to get people to buy some weird scammy crypto.
I guess you could have an eyeball scanner at your computer that only sends out a binary "yes this person is human" to the system every time they log in. That sounds expensive, hackable, and just janky, though.
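The "binary yes" idea could amount to the device signing a tiny, short-lived attestation that the service verifies. A rough sketch, assuming a shared secret provisioned to the device (and glossing over exactly the hard parts, like tamper resistance and key theft, that make it hackable):

```python
import hashlib
import hmac
import json

# Hypothetical key provisioned to the scanner device and known to the service.
SHARED_KEY = b"device-provisioned-secret"

def make_attestation(key, ts):
    """Device side: sign a minimal claim ("human: true" plus a timestamp)."""
    payload = json.dumps({"human": True, "ts": ts}).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_attestation(key, payload, tag, now, max_age=60):
    """Service side: check the signature, then check freshness."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or corrupted attestation
    claims = json.loads(payload)
    return claims.get("human") is True and now - claims["ts"] <= max_age
```

The freshness window is what stops a captured "yes" from being replayed forever, though as the comment says, a compromised device can still mint fresh ones at will.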
Maybe it would result in people taking Internet security seriously, and in companies being held accountable for data breaches, if there were these sorts of consequences.