Comment by AnthonyMouse
3 days ago
> I just like, can't help but start with fuck these companies. All other arguments are downstream of that.
The solution would then be to break them up or do things like require adversarial interoperability, rather than ineffective non-sequiturs like requiring them to ID everyone.
The perverse incentive comes from a single company sitting on a network effect. You have to use Facebook because other people use Facebook, so if the algorithm shows you trash and rage bait, you can't unilaterally decide to leave without abandoning everyone still there. Meanwhile, the Facebook company gets to show ads to everyone who uses it, and therefore wants to maximize everyone's time wasted on Facebook, so the algorithm shows you trash and rage bait.
Now suppose they're not allowed to restrict third party user agents. You get a messaging app and it can send messages to people on Facebook, Twitter, SMS, etc. all in the same interface. It can download the things in "your feed" and then put it in a different order, or filter things out, and again show content from multiple services in the same interface, including RSS. And then that user agent can do things like filter out adult content, if you want it to.
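A user agent like that is conceptually simple. Here's a minimal sketch in Python of the merge-and-filter step; the service names, post format, and filter rule are all hypothetical, not any real platform's API:

```python
# Sketch of a client-side user agent that merges feeds pulled from
# several services and re-orders/filters them locally, instead of
# accepting the platform's engagement-maximizing ranking.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Post:
    service: str                      # e.g. "facebook", "twitter", "rss"
    author: str
    timestamp: float                  # unix time
    text: str
    tags: set[str] = field(default_factory=set)

def merge_feeds(feeds: list[list[Post]],
                exclude: Callable[[Post], bool]) -> list[Post]:
    """Combine posts from multiple services, drop anything the user's
    own filter rejects, and sort chronologically."""
    merged = [p for feed in feeds for p in feed if not exclude(p)]
    return sorted(merged, key=lambda p: p.timestamp, reverse=True)

# The user, not the platform, supplies the filter, e.g. no adult content:
def no_adult(p: Post) -> bool:
    return "adult" in p.tags

feeds = [
    [Post("facebook", "alice", 100.0, "hello")],
    [Post("rss", "blog", 200.0, "new article")],
    [Post("twitter", "bob", 150.0, "nsfw", {"adult"})],
]
timeline = merge_feeds(feeds, exclude=no_adult)
```

The point is that ranking and filtering happen on the client, so the hosting service never learns (or controls) what the user chose to see.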
We need to fix the actual problem, which is that the hosting service shouldn't be in control of the user interface to the service.
Indeed, "interoperability" is what would hurt the social media giants the most. Cory Doctorow recently gave an excellent talk in which he noted that back in the early 00s, Facebook (and others) used interoperability to offer services that could interact with, push to, and pull from MySpace (the big dog back then) in order to siphon off its users and content. But once Facebook became the dominant player, it moved to make the exact tactics it had used (interoperability and automation) illegal. Talk about regulatory capture ...
> ineffective non-sequiturs like requiring them to ID everyone.
Is that really a non-sequitur though? Cigarettes are harmful and addictive so their sale is age gated. So too for alcohol. Gambling? Also yes. So wouldn't age gating social media be entirely consistent in that case?
Not that I'm necessarily in favor of it. I agree that various other regulations, particularly interoperability, would likely address at least some of the underlying concerns. But then I think it might not be such a bad idea to have all of the above rather than one or the other.
If I went to the store and asked for a pack of cigarettes, I show my ID (well, I would if I were carded, but I'm no longer carded :)) and the clerk looks at it, maybe scans it, then takes my money.
If I try to go to an adult website, or even just a Discord server with adult content, I need to upload my ID. And now there are numerous third parties looking at my ID, and I have no idea whether I can trust them with my info. Indeed, I probably can't, given how many of them have already been breached.
Of all people, PornHub actually has a pretty good write-up on this (1) (2). They refer to "device-based" age verification, where you verify your identity once to, say, Google or whoever. Then your device proves your age. Fewer middlemen. One source of truth.
I am not against age verification. I am against the surveillance state.
(1) https://www.pornhub.com/blog/age-verification-in-the-news
(2) https://www.xbiz.com/news/281228/opinion-why-device-based-ag...
> they refer to "device-based" age verification, where you verify your identity once to say, Google or whoever. Then your device proves your age. Fewer middlemen. One source of truth.
This is still an absurdity. You don't need the device to prove the age of the user to the service; you need the service to provide the age restriction of the content to the device. Then the device knows whether the user is an adult or a kid, and thereby knows whether to display the content, and you don't need Google to know that.
3 replies →
IRL
> If I went to the store and asked for a pack of cigarettes
online
> and I have no idea if I can trust them with my info
Why did you trust how your ID was scanned (if carded)?
With security cameras present, where did that scanned data end up?
nit: the Discord ID verification hasn't rolled out yet, has it?
1 reply →
> Is that really a non-sequitur though?
You have something (human communication) which is not intrinsically harmful -- indeed it is intrinsically necessary -- but has been made harmful on purpose. That is very much unlike those other things, where the harm is in their very nature and isn't prevented by the provider just not being a schmuck on purpose.
That makes age gating a farce, because kids need to be able to communicate with other people, but you would end up in one of these scenarios, each of which is inane:

1) Providers all put up age restrictions and meaningfully enforce them, and then teenagers are totally prohibited from communicating over the internet.

2) Providers all put up "age restrictions" which teenagers bypass in ten seconds, and the whole thing is a pointless fraud.

3) You try to separate places for kids from places for adults, but then either: a) Adults prefer adult spaces where they're not censored, so they congregate there and those spaces get the network effect, and then teens have to sneak in even if they're not looking for adult content, because that's where the bulk of all content is; or b) Nobody likes to show ID even if they're an adult, so adults congregate in the least restrictively moderated space where they don't have to show ID, and that space gets the network effect. Then to the extent that they censor, they're censoring the adults, which is the thing that wasn't supposed to happen, and to the extent that they don't censor, you have a "kid space" that contains adult content.
It's a trash fire specifically because there's a network effect, which is an aggregating force causing adults and kids to be in the same space so they can communicate with each other. Then the space with the network effect would either have to censor the adults even though they can't leave because of the network effect, or not censor the adults and then have adult content in the space the kids have to be because of the network effect.
The way you fix this is not by trying to separate the kids and adults into separate networks, it's by tagging specific content so the client device can choose not to display adult content if the user is a kid. Which also solves the privacy issue, because you don't have to provide any ID to the service when the choice of what content to display happens on the client and the service is only tasked with identifying the content.
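To make the inverted flow concrete, here's a minimal sketch in Python. The rating vocabulary and data shapes are assumptions for illustration, not a real protocol; the point is only that the service ships a label with each item and the adult/minor decision is made locally, so no ID ever leaves the device:

```python
# Sketch: the service tags content with an age rating; the client
# device, which knows locally whether its user is a minor, decides
# what to render. The service never learns who the user is.
from dataclasses import dataclass

@dataclass
class Content:
    body: str
    rating: str   # supplied by the service, e.g. "all-ages" or "adult"

def visible_to(user_is_adult: bool, items: list[Content]) -> list[Content]:
    """Client-side filter: the only thing crossing the wire is the
    content plus its rating, never the user's identity or age."""
    if user_is_adult:
        return items
    return [c for c in items if c.rating != "adult"]

items = [Content("cat photos", "all-ages"), Content("explicit", "adult")]
kid_view = visible_to(False, items)
adult_view = visible_to(True, items)
```

Under this design the service's only job is labeling honestly, which is far easier to audit than per-user ID checks.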