Comment by akoboldfrying
8 days ago
I think that, ultimately, systems that humans use to interact on the internet will have to ditch anonymity. If people can't cheaply and reliably distinguish human output from LLM output, and people care about only talking to humans, we will need to establish authenticity via other mechanisms. In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.
Nobody wants this, because it's a pain, it hurts privacy (or easily can hurt it) and has other social negatives (cliques forming, people being fake to build their reputation, that episode of Black Mirror, etc.). Anonymity is useful like cash is useful. But if someone invents a machine that can print banknotes that fool 80% of people, eventually cash will go out of circulation.
I think the big question is: How much do most people actually care about distinguishing real and fake comments? It hurts moderators a lot, but most people (myself included) don't see this pain directly and are highly motivated by convenience.
We will ditch anonymity, but for pseudonymity, not eponymity. Meaning: someone, somewhere will know who is who and can attest that 1000 usernames belong to humans, but to everyone except that one attester, people will be able to identify themselves with just a username.
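Blind signatures are one concrete mechanism for this kind of attester, with an even stronger property: the attester vouches that a pseudonym belongs to a verified human, yet cannot later link the pseudonym back to the person it signed for. A toy sketch in Python (all numbers are textbook-sized and insecure, purely illustrative; a real system would use a vetted scheme such as the RSA blind signatures of RFC 9474):

```python
# Toy blind-signature attestation. Parameters are illustrative only.
import hashlib

# Attester's toy RSA keypair (classic textbook values, NOT secure).
p, q, e = 61, 53, 17
n = p * q                           # public modulus
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def h(msg: str) -> int:
    """Hash a pseudonym into the RSA message space."""
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

# 1. The user blinds a hash of their pseudonym with a factor r, so the
#    attester never learns which pseudonym it is signing.
username = "throwaway123"
m = h(username)
r = 7                               # in practice: fresh random, coprime to n
blinded = (m * pow(r, e, n)) % n

# 2. The attester verifies the person is human out of band, then signs blind.
blind_sig = pow(blinded, d, n)

# 3. The user unblinds, recovering an ordinary signature on h(username).
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone can verify the attestation against the public key (n, e),
#    yet the attester cannot link the pseudonym back to the person.
assert pow(sig, e, n) == m
```

The unblinding works because (m * r^e)^d = m^d * r (mod n), so multiplying by r's inverse leaves a plain signature on the pseudonym hash.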
>In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.
Yep, that is the way.
Also, LLMs will help us create new languages or dialects from existing ones, with the purpose of distinguishing the in-group of people from the out-group of people, and from the out-group of LLMs as well. We have been in a language arms race for that particular purpose for thousands of years. Now LLMs are one more reason for the arms race to continue.
If we focus, for example, on making new languages or dialects that sound better to the ear, LLMs have no ears; humans will always be one step ahead of the machine, provided that the language evolves non-stop. If it doesn't evolve all the time, LLMs will have time to catch up. Ears are some of the most advanced machinery in our bodies.
BTW, I am currently building a program that takes a book written in Ancient Greek and automatically creates an audiobook, or videobook, using Google's text-to-speech (the same engine behind the Google Translate website).
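A minimal sketch of what such a pipeline might look like (assuming the third-party gTTS package, which drives the same voices as Google Translate; the file names are hypothetical, and gTTS offers only a Modern Greek voice, so Ancient Greek text would be read with modern pronunciation):

```python
def chunk_text(text: str, limit: int = 4500) -> list[str]:
    """Split a book into pieces below the TTS request size limit,
    preferring sentence-ish boundaries."""
    chunks, current = [], ""
    for sentence in text.replace("\n", " ").split(". "):
        if current and len(current) + len(sentence) + 2 > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current}. {sentence}" if current else sentence
    if current:
        chunks.append(current)
    return chunks

def book_to_audio(path: str, out_prefix: str = "part") -> None:
    """Render each chunk of the book to a numbered mp3 file."""
    from gtts import gTTS  # imported lazily so chunking is testable offline
    text = open(path, encoding="utf-8").read()
    for i, chunk in enumerate(chunk_text(text)):
        # "el" is Modern Greek; gTTS has no Ancient Greek voice.
        gTTS(chunk, lang="el").save(f"{out_prefix}_{i:04}.mp3")
```

The chunking step matters because TTS APIs cap request sizes, so a whole book has to be fed through in pieces and the resulting audio files concatenated afterwards.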
I think people in the future will be addicted to how new languages sound, or can be sung.
Well, if I’m in a discussion, I’d like to know whether the other participants are actual people or just slopmachines (“AIs”).
I have made it a point to be un-anonymous, for the last few years. If you look at my HN handle, it's easy to see who I am, and to look at my work.
This was not always the case. I used to be a Grade A asshole, and have a lot to atone for.
I also like to make as much of my work open, as I can.
No we won't. Just build your web of trust and leave the rest of us anonymous and alone.
You're just doing the bidding of corporations who want to sell ID online systems for a more authoritarian world.
Those systems also use astroturfing. It was not invented with LLMs.
See my other comment https://news.ycombinator.com/item?id=44130743#44150878 for how this is "bleak" mostly if you were comfortable with your Overton window and censorship.
> leave the rest of us anonymous and alone
No one is trying to take away your right to host or participate in anonymous discussions.
> Those systems also use astroturfing. It was not invented with LLMs.
No one is claiming that LLMs invented astroturfing, only that they have made it considerably more economical.
> You're just doing the bidding of corporations who want to sell ID online systems for a more authoritarian world.
Sure, man. Funny that I mentioned "web of trust" as a potential solution, a fully decentralised system designed by people unhappy with the centralised nature of PKI. I guess I must be working in deep cover for my corporate overlords, cunningly trying to throw you off the scent like that. But you got me!
If you want to continue drinking from a stream that has been growing increasingly polluted since November 2022, you're welcome to do so. Many other people don't consider this an appealing tradeoff, and the social systems used by those people are likely to adjust accordingly.
> Sure, man. Funny that I mentioned "web of trust" as a potential solution, a fully decentralised system designed by people unhappy with the centralised nature of PKI. I guess I must be working in deep cover for my corporate overlords, cunningly trying to throw you off the scent like that. But you got me!
I'm sorry man, I can't trust anything you say unless you post your full name and address. I can also throw some useless strawman quip to distract the conversation.
No one is forcing you to stay up at night or worry about this, so don't.
> If you want to continue drinking from a stream that's been becoming increasingly polluted since November 2022, you're welcome to do so. Many other people don't consider this an appealing tradeoff and social systems used by those people are likely to adjust accordingly.
Lol. The naivety of people like you, throwing around these cute dates to lend a semblance of critical reading, is hilarious. Not that it helps you, since you immediately reach for authoritarian solutions and meet any challenge with a strawman.
But hey, give us more "sarcasm".
I'll quote your comment because it's worth reading in full before going back to your "are you crazy? No one is saying X" fallback.
> I think that, ultimately, systems that humans use to interact on the internet will have to ditch anonymity. If people can't cheaply and reliably distinguish human output from LLM output, and people care about only talking to humans, we will need to establish authenticity via other mechanisms. In practice that means PKI or web of trust (or variants/combinations), plus reputation systems.
> Nobody wants this, because it's a pain, it hurts privacy (or easily can hurt it) and has other social negatives (cliques forming, people being fake to build their reputation, that episode of Black Mirror, etc.). Anonymity is useful like cash is useful. But if someone invents a machine that can print banknotes that fool 80% of people, eventually cash will go out of circulation.
> I think the big question is: How much do most people actually care about distinguishing real and fake comments? It hurts moderators a lot, but most people (myself included) don't see this pain directly and are highly motivated by convenience.
Lol.
You could have authenticated proofs of human-ness without providing your full identity. There are similar systems today which can prove your age without providing your full identity.
> I think that, ultimately, systems that humans use to interact on the internet will have to ditch anonymity.
Relevant meme video (watching it is, in my opinion, worth your time):
You ditch anonymity, and you get a cascading chilling effect across the interwebs, because you cannot moderate communities against the political headwinds of your nation.
Worse, it won’t work. We are already able to create fake human accounts, and it’s not even a contest.
And with LLMs, I can do some truly nefarious shit. I could create articles about the discovery of an unknown tribe in the Amazon, populate some unmanned national Wikipedia edition with news articles to substantiate the credentials of a fake anthropologist, and use that identity to have a bot interact with people.
Heck, I'm bad at this, so someone is surely already doing something worse than what I can imagine.
Essentially, we can now cheaply create enough high-quality supporting evidence for proof of existence. We can spoof even proof-of-life photos, to the point that account-takeover resolution tickets can't be sure whether the selfies are faked. <Holy shit, I just realized this. Will people have to physically go to Meta offices now to recover their accounts???>
Returning to moderation, communities online, and anonymity:
The reason moderation and misinformation have been targets of American Republican senators is that the janitorial task of reducing the spread of conspiracy theories touched the conduits carrying political power.
That threat to their narrative production and distribution capability has unleashed a global campaign to target moderation efforts and regulation.
Dumping anonymity requires us to basically jettison ye olde internet.
I kind of wonder whether I care if comments come from real people, and I probably don't, as long as they're thought-provoking. I actually thought it would be an interesting experiment to make my own walled-garden LLM link aggregator, sans all the rage bait.
I mean, I care if meetup.com has real people, and I care if my kids' school's Facebook group has real people, and other forums where there is an expectation of online/offline coordination, but Hacker News? Probably not.
I feel like part of why comments here are thought-provoking is that they're grounded in something. It's not quite coordination, but if someone talks about using software at a startup or small company, I do assume they're genuine about that, which tells you more about something being practical in the real world.
And use cases like bringing up an issue on HN to get companies to reach out and fix it would probably be much harder with LLMs taking up the bandwidth.
Yeah, this is the trick. For example, in the sort of private Hacker News I was talking about creating (I haven't created it yet), I suspect that getting the comments not to sound canned would take a lot of prompt engineering, and that even if an individual comment is good, the style over time would be jarring.
On the internet, maybe you have people using character.io, or other complex prompts to make the comments sound more diverse and personal. Who knows.
I wonder how many different characters you would need on a forum like hacker news to pass a sort of collective Turing test.
I could understand that position, except that I don't think most LLM-generated text is produced for the purpose of thought-provoking conversation.
My expectation would be that anyone going to the effort of putting an LLM-powered comment bot online is doing it for some ulterior motive, typically profit or propaganda.
Given this, I would equate not caring about the provenance of a comment with not caring whether you're being intentionally misinformed for some deceptive purpose.
Agree. Another complicating factor for detection is that I don't personally mind a sliver of self-promotion in a comment/post if I feel it's "earned" by the post being on-topic and insightful overall. If such a comment were posted by an LLM, I think I would actually be fine with that.