
Comment by segasaturn

1 year ago

Online identity verification is the obvious solution, the only problem is that we would lose the last bits of privacy we have on the internet. I guess if everyone was forced to post under their real name and identity, we might treat each other with better etiquette, but...

> I guess if everyone was forced to post under their real name and identity, we might treat each other with better etiquette, but...

But Facebook already proved otherwise.

Optimistically, if all you want to do is prove that you are, in fact, a person, and not that you are a specific person, there's no real reason you'd need to lose privacy. A service could vouch that you are a real person, verified on their end, without giving the site owner any information about which person you are.

  • That doesn't stop Verified Humans(TM) from copying and pasting AI slop into text boxes and pressing "Post." If there's really good pseudonymity, where Verified Humans can have as many pseudonyms as they like and the pseudonyms aren't connected to each other, one human could build an entire social network of fake pseudonyms talking to each other in LLM text but with impeccable Verified Human labels.

    • The identity provider doesn't need to tell the forum that you are 50 different people. They could have a system where, if the forum bans you, the forum would know on reapplication that it's the same person they already banned. As for people making a real-person account and then using it for AI stuff: yes, there will have to be a way to persistently ban someone through anonymous verification, but that's possible. Both the identity provider and the forum are incentivized to play nice with each other. If an identity provider is letting one person make 50 spam accounts, the forum can stop accepting verification from that provider.
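      Something like this already exists: OpenID Connect's "pairwise" subject identifiers give each site a different stable ID for the same user. A minimal sketch of the idea (the names and key here are illustrative, not any real provider's API): the provider derives a per-site token with a keyed hash, so the same person always maps to the same token on the same forum, while two different forums can't link their tokens together.

      ```python
      import hmac
      import hashlib

      # Hypothetical provider-side secret; never shared with any forum.
      PROVIDER_KEY = b"provider-secret-key"

      def pairwise_pseudonym(user_id: str, site_id: str) -> str:
          """Derive a stable, site-scoped pseudonym for a verified human.

          Same (user, site) pair -> same token, so a ban sticks even if
          the person deletes their account and reapplies. Different sites
          get unlinkable tokens, so the forum never learns who you are
          and two forums can't correlate their user bases.
          """
          msg = f"{user_id}|{site_id}".encode()
          return hmac.new(PROVIDER_KEY, msg, hashlib.sha256).hexdigest()

      # The forum only ever sees the token:
      token = pairwise_pseudonym("alice", "forum.example")
      assert token == pairwise_pseudonym("alice", "forum.example")  # same on reapplication
      assert token != pairwise_pseudonym("alice", "other.example")  # unlinkable across sites
      ```

      Since each verified human maps to exactly one token per site, this also answers the 50-accounts problem, as long as the forum allows one account per token.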

  • I just want to semi-hijack this thread to note that you can actually peek into the future on this issue, by just looking at the present chess community.

    For readers who are not among the cognoscenti on the topic: in 1997 supercomputers started playing chess at around the same level as top grandmasters, and some PC programs were also competitive (most notably, Fritz beat a Deep Blue prototype in 1995, before the Kasparov games, and Fritz was not a supercomputer). From around 2005, if you were interested in chess, you could have an engine on your computer that was more powerful than either you or your opponent. Since about 2010, there's been a decent online scene of people playing chess.

    So the chess world is kinda what the GPT world will be, in maybe 30ish years? (It's hard to compare two different technology growth curves, but this assumes that they both hit the end of their "exponential increase" phases at around the same time and then shifted to "incremental improvements" at around the same rate. It also assumes that in 5-10 years we'll get to the "Deep Blue defeats Kasparov" moment, where transformer-based machine learning is actually better at answering questions than, say, some university professors.)

    The first thing is, proving that someone is a person, in general, is small potatoes. Whatever you do to prove that someone is a real person, they might be farming some or all of their thought process out to GPT.

    The community that cares about "interacting with real humans" will be more interested in continuous interaction than in "post something and see what answers I get," because long latencies are exactly where someone can feed your question to GPT, and GPT will give you a better answer anyway. So if you care about real humanity, that means realtime interaction. The chess version is: "it's much harder to cheat at Rapid or Blitz chess."

    The second thing is, privacy and nonprivacy coexist. The people who are at the top of their information-spouting games will deanonymize themselves. Magnus Carlsen just has a profile on chess.com; you can follow his games.

    Detection of GPT will look roughly like this: you will be chatting with someone who putatively has a real name and a physics pedigree, and you ask them to answer physics questions, and they appear to have a really vast physics knowledge, but then when you ask them a simple question like "and because the force is larger the accelerations will tend to be larger, right?" they take an unusually long time to say "yep, F = m a, and all that." And that's how you know this person is pasting your questions to a GPT prompt and pasting the answers back at you.

    This is basically what grandmasters look for when calling out cheating in online chess. On the one hand there's "okay, that's just a really risky way to play 4D chess when you have a solid advantage and can just build on it with more normal moves" -- but the chess engine sees 20 moves down the road beyond what any human sees, so it knows that these moves aren't actually risky. And on the other hand there's "okay, there's only one reason you could possibly have played that last rook move, and it's if the follow-up was to take the knight with the bishop; otherwise you're just losing. You foresaw all of this, right?" and yet the "person" is still thinking (because the actual human didn't understand why the computer was making that rook move, and now needs the computer to tell them that taking the knight with the bishop is the appropriate follow-up).

    • > you will be chatting with someone who putatively has a real name and a physics pedigree, and you ask them to answer physics questions, and they appear to have a really vast physics knowledge, but then when you ask them a simple question like "and because the force is larger the accelerations will tend to be larger, right?" they take an unusually long time to say "yep, F = m a, and all that." And that's how you know this person is pasting your questions to a GPT prompt and pasting the answers back at you.

      Honestly, even in my area of expertise, if the abstraction/skill level or the kind of wording suddenly shifts (in your example: to much less scientifically precise wording, more like how a 10-year-old child would ask), it often takes me quite some time to adjust (it completely takes me out of my flow).

      So your criterion would yield an insane number of false positives on me.

My parents use Facebook a lot, and the things some people say under their real names are really mind-blowing.

Posting with your IRL identity removes the option to back down after a mistake and leads to much worse escalations, because public reputations are at stake by default.