Comment by thatguy0900
1 year ago
Invite-only forums, or forums with actual identity checking of some sort. Google and Facebook are in a prime position to provide real online identity services to other websites, which makes it even funnier that Facebook itself is developing bots. Maybe we'll eventually get bank- or government-issued online identity verification.
Online identity verification is the obvious solution; the only problem is that we would lose the last bits of privacy we have on the internet. I guess if everyone were forced to post under their real name and identity, we might treat each other with better etiquette, but...
> I guess if everyone were forced to post under their real name and identity, we might treat each other with better etiquette, but...
But Facebook already proved otherwise.
Optimistically, if all you want to do is prove that you are, in fact, a person, and not that you are a specific person, there's no real reason you'd need to lose privacy. A service could vouch that you are a real person, verified on their end, and give the site owner no context as to which person you are.
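As a sketch of how that could work without sacrificing privacy: a blind signature lets the verification service sign a token it never sees, so a site can check "a verified human made this" with no way to link the token back to a person. Here's a toy illustration in Python (a hypothetical protocol with toy-sized RSA numbers; a real deployment would use 2048+ bit keys or a scheme like Privacy Pass):

    import math
    import secrets

    # Toy RSA key for the "humanity service" (real keys are 2048+ bits).
    p, q = 61, 53
    n = p * q                          # public modulus
    e = 17                             # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

    # 1. The user picks a random token and blinds it with a factor r,
    #    so the service can't see (or later recognize) the token itself.
    m = secrets.randbelow(n - 2) + 2
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:        # r must be invertible mod n
            break
    blinded = (m * pow(r, e, n)) % n

    # 2. The service checks humanity however it likes (CAPTCHA, ID, ...)
    #    and signs the *blinded* value; it learns nothing about m.
    s_blind = pow(blinded, d, n)

    # 3. The user strips the blinding factor off the signature.
    s = (s_blind * pow(r, -1, n)) % n

    # 4. Any site can verify with only the public key (n, e): the token
    #    is human-attested, but not linkable to any identity.
    print("valid human token:", pow(s, e, n) == m)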
That doesn't stop Verified Humans(TM) from copying and pasting AI slop into text boxes and pressing "Post." If there's really good pseudonymity, and Verified Humans can have as many unconnected pseudonyms as they like, one human could build an entire social network of fake pseudonyms talking to each other in LLM text, all with impeccable Verified Human labels.
I just want to semi-hijack this thread to note that you can actually peek into the future on this issue, by just looking at the present chess community.
For readers who are not among the cognoscenti on the topic: in 1997, supercomputers started playing chess at around the same level as top grandmasters, and some PCs were also competitive (most notably, Fritz beat Deep Blue's prototype in 1995, before the Kasparov games, and Fritz was not a supercomputer). From around 2005, if you were interested in chess, you could have an engine on your computer that was more powerful than either you or your opponent. Since about 2010, there's been a decent online scene of people playing chess.
So the chess world is kinda what the GPT world will be in maybe 30ish years? (It's hard to compare two different technology growth curves, but this assumes they both hit the end of their "exponential increase" sections at around the same time and then shifted to "incremental improvements" at around the same rate. It also assumes that in 5-10 years we'll get our "Deep Blue defeats Kasparov" moment, where transformer-based machine learning becomes actually better at answering questions than, say, some university professors.)
The first thing is, proving that someone is a person, in general, is small potatoes. Whatever you do to prove that someone is a real person, they might be farming some or all of their thought process out to GPT.
The community that cares about "interacting with real humans" will be more interested in continuous interaction than in "post something and see what answers I get," because long latencies are exactly where GPT can answer your question, and give you a better answer anyway. So if you care about real humanity, that means realtime interaction. The chess version is: "it's much harder to cheat at Rapid or Blitz chess."
The second thing is that privacy and nonprivacy coexist. The people who are at the top of their information-spouting games will deanonymize themselves: Magnus Carlsen just has a profile on chess.com, and you can follow his games.
Detection of GPT will look roughly like this: you're chatting with someone who putatively has a real name and a physics pedigree, you ask them physics questions, and they appear to have really vast physics knowledge. But then you ask a simple question like "and because the force is larger, the accelerations will tend to be larger, right?" and they take an unusually long time to say "yep, F = m a, and all that." That's how you know this person is pasting your questions into a GPT prompt and pasting the answers back at you.

This is basically what grandmasters look for when calling out cheating in online chess. On the one hand there's "okay, that's just a really risky way to play 4D chess when you have a solid advantage and could just build on it with more normal moves" -- but the chess engine sees 20 moves down the road beyond what any human sees, so it knows those moves aren't actually risky. On the other hand there's "okay, there's only one reason you could possibly have played that last rook move, and it's if the follow-up was to take the knight with the bishop; otherwise you're just losing. You foresaw all of this, right?" -- and yet the "person" is still thinking, because the actual human didn't understand why the computer made that rook move, and now needs the computer to tell them that taking the knight with the bishop is the appropriate follow-up.
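A crude sketch of that timing signal, for intuition (function names and thresholds here are made up for illustration, nothing like a real anti-cheat system):

    def suspicion_score(samples: list[tuple[float, float]]) -> float:
        """samples: (difficulty, seconds) pairs for each question asked.

        A real expert answers easy questions almost instantly; someone
        relaying questions to a model answers everything at roughly the
        same slow pace. Long latency on easy questions is the tell.
        """
        slow_on_easy = [secs for difficulty, secs in samples
                        if difficulty < 0.2 and secs > 30.0]
        return len(slow_on_easy) / max(len(samples), 1)

    # Two trivially easy questions, each answered after a long pause:
    history = [(0.1, 45.0), (0.9, 50.0), (0.15, 40.0), (0.8, 55.0)]
    print(f"suspicion: {suspicion_score(history):.2f}")  # prints 0.50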
My parents use Facebook a lot, and the things some people say under their real names are really mind-blowing.
Posting under your IRL identity removes the option to back down after a mistake and leads to much worse escalations, because public reputations are at stake by default.
> with actual identity checking of some sort
I am hoping OpenID4VCI[0] will fill this role. It looks flexible enough to preserve public privacy on forums while still verifying that you are the holder of a credential issued to a person. The credential could come from an issuer that can verify you are an adult (a bank, for example). A site or forum can then work with a verifier that checks whatever combination of data it needs from one or more presented credentials. I haven't dug into the full details of the implementation and am skimming over a lot, but that appears to be the gist of it.
[0] https://openid.net/specs/openid-4-verifiable-credential-issu...
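For intuition, a minimal sketch of that selective-disclosure flow. To be clear: this is not the OpenID4VCI wire format, all names here are hypothetical, and issuer/verifier trust is collapsed into a single shared HMAC key to keep the example self-contained (real verifiable credentials are signed with the issuer's private key and checked against its public key):

    import hashlib
    import hmac
    import json

    ISSUER_KEY = b"demo-issuer-key"  # stand-in for the bank's signing key

    def issue_credential(claims: dict) -> dict:
        """Bank-as-issuer attests to coarse claims only -- no name inside."""
        payload = json.dumps(claims, sort_keys=True).encode()
        proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return {"claims": claims, "proof": proof}

    def verify_presentation(credential: dict, required: dict) -> bool:
        """Forum-side verifier: check the proof, then only the needed claims."""
        payload = json.dumps(credential["claims"], sort_keys=True).encode()
        proof = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(proof, credential["proof"]):
            return False
        return all(credential["claims"].get(k) == v
                   for k, v in required.items())

    # The bank attests "adult: True"; the forum learns nothing else.
    cred = issue_credential({"adult": True})
    print(verify_presentation(cred, {"adult": True}))  # True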