
Comment by SoftTalker

7 days ago

Well why haven't all the big tech companies done it then?

They have only themselves to blame. They had years to fix the problem of inappropriate content being delivered to kids and their response was sticking their fingers in their ears and saying "blah blah blah parenting blah blah blah"

And it really should be the opposite. Assume content is not kid-safe by default, and allow sites to declare if they have some other rating.
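A minimal sketch of that default-deny idea, using the real published RTA label value as the adult marker; the "kid-safe" rating string and the `allow_for_child` helper are invented for illustration, not part of any standard:

```python
# Hypothetical client-side filter following the proposal above:
# treat content as NOT kid-safe unless the site explicitly declares
# a safe rating. The RTA label string is the real published one;
# the "kid-safe" value is a made-up placeholder.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def allow_for_child(headers: dict) -> bool:
    rating = headers.get("Rating", "")
    if RTA_LABEL in rating:
        return False             # explicitly adult-labelled: block
    return rating == "kid-safe"  # default-deny: allow only declared-safe

print(allow_for_child({"Rating": RTA_LABEL}))   # False
print(allow_for_child({}))                      # False (deny by default)
print(allow_for_child({"Rating": "kid-safe"}))  # True
```

Note the inversion: an unlabelled site is blocked, so the burden falls on sites that want to be reachable by children, not on everyone else.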

The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content. If it was then this kind of solution would be being legislated for. It’s just about making everyone identifiable.

  • > The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content.

    The reason that mainstream politicians are pushing is because the public wants something done to protect their kids.

    Are there likely to be bad actors pushing for it for nefarious reasons as well? Sure. Are the 'solutions' inadequate and often tech- and privacy-illiterate? Absolutely. Is the entire impulse to demand that government 'fix' this issue wrong? Maybe.

    But the idea that this is all a smoke-screen from top to bottom needs to die. Not just because it's wrong, but because it's also unhelpful. If you wade into the debate saying "It's all a lie, this was never about the kids!" you're easily dismissed as a nut and an absolutist who doesn't appreciate that real people want their real kids to be protected.

    • Yep, and the tech companies had years to address these concerns and did not, so now the creaky gears of government regulation are turning. They (meaning YOU, a lot of tech company employees who are now outraged about this) could have headed this off years ago and provided a solution on their own terms.

    • The public largely wants whatever the media tells them to want and the media in turn tells people to want whatever the same bad actors want them to want.

    • So, why are those "real people" actually not willing to do their job? I am so pissed with parents who think the government is supposed to solve their own inability to raise a child.

      8 replies →

  • If it is about making everyone identifiable how come California's version doesn't require providing any identifying information when setting it up on a child's device?

    • Because "making everyone identifiable" isn't an explicit design goal. Rather it is merely an implicit imperative (of Facebook et al, who are pushing these laws) that casts its shadow over the design. That shadow is what results in a design based around sending identifying information from the client to the server. Once this dynamic is normalized, servers will demand ever-more identifying information and evidence that it is correct.

      Note that this design does the exact opposite of giving parents control to protect their own children - rather it puts the ultimate decision-making ability into the hands of corporate attorneys! For example, we can easily imagine a "Facebook4Kidz" site that does the bare legal minimum to avoid liability for addicting kids to dopamine drips, and no more. Client-side software based around RTA headers would allow parents to choose to filter things like that out, whereas when the server is making the decision it's anything goes as long as the corporate attorneys have given it the green light.

      2 replies →

  • >If it was then this kind of solution would be being legislated for.

    What's more likely: a global conspiracy to get age verification passed so that these unnamed groups can identify everyone for some unknown purpose, or politicians just not understanding tech?

    The way people try to pretend that there can't be any organic desire for these proposals is so bizarre and is a major cause for all these proposed solutions being so technically dubious. Refusal to recognize the problem means you won't be part of solving the problem.

  • Your not understanding why age verification is happening does not make it a conspiracy; there is another explanation. There is an anti-regulatory crowd that will invent any possible excuse to suggest tech companies shouldn't be accountable and we should just leave the Internet be. Those people make a lot of money exploiting everyone, as it happens, and they also pay for journalists to tell you that it's all about violating privacy or something. (The same folks will tell you opening up Android for third-party AI tools would be a privacy and security risk, and not ask you to notice it would just cost Google a lot of money.)

    We've been running what is essentially a social experiment on our kids for the past two decades, and it has not gone well. Social media has had a toxic impact on kids. CSAM and child abuse are rampant, and most "privacy services" like disposable email and VPNs are the primary source. These are facts, whether you like them or not. There are, in fact, kids dying, school shootings, grooming, etc., all of which are the direct result of our failure to regulate social media companies, with Section 230 being the primary problem.

    OS-level age verification is likely the best route, as private information can remain on a device in your control, and the browser then just needs to attest to websites whether or not the user should be allowed access, without conveying more detail. Obviously anyone with a Linux box will have ways around it, and anything based on your own device will be exploitable in some way, but it would be generally effective for the average child.
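    A minimal sketch of the attestation flow described above. Everything here is invented for illustration: real designs would use an asymmetric device key (so sites never hold signing material), and the key, function names, and token format are placeholders; HMAC stands in only to keep the sketch stdlib-only. The point is the privacy property: the token carries one boolean claim and a signature, no identity.

    ```python
    import base64
    import hashlib
    import hmac
    import json

    # Hypothetical OS "secure enclave" key. A real scheme would keep an
    # asymmetric key in hardware; this shared secret is a stand-in.
    DEVICE_KEY = b"demo-device-key"

    def issue_age_token(over_18: bool) -> str:
        """OS-side: mint a token asserting only the age claim, no identity."""
        claim = json.dumps({"over_18": over_18}).encode()
        sig = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest()
        return base64.b64encode(claim).decode() + "." + sig

    def verify_age_token(token: str) -> bool:
        """Site-side: check the signature, read the single boolean claim."""
        claim_b64, sig = token.split(".")
        claim = base64.b64decode(claim_b64)
        expected = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        return json.loads(claim)["over_18"]

    token = issue_age_token(True)
    print(verify_age_token(token))  # True: the site learns the claim, nothing else
    ```

    The site ends up with a yes/no answer it can trust, while the birth date, ID document, or whatever established the age never leaves the device.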

    • Any "verification" means unacceptable privacy violations.

      The best route is better parental controls, that are not enabled by default. Locking down the OS like ransomware until the user submits to age verification is the wrong approach, and what Apple did in the UK needs to be highly illegal.

      13 replies →

    • > These are facts, whether you like them or not.

      [Citation Needed] As I understand it, the debate on whether social media is responsible for actual harms in kids is still open and ongoing. Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms [0]. Scientists are hoping to get some verification from the actual social experiments that we're conducting in the UK and Australia on this.

      Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.

      [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC12165459/ "Social media and technological advancements’ impact on adolescent mental health is complex. It can be both a risk factor and a valuable support system. Excessive and problematic use has been linked to increased rates of MDD, anxiety, and mood dysregulation, while also exacerbating symptoms of ADHD, bipolar disorder, and BDD. Simultaneously, digital platforms provide opportunities for social connection, peer support, and mental health management, particularly for individuals with ASD and those seeking online mental health communities. The challenge is finding a balance. Although social media offers benefits, it also poses risks like addiction, negative social comparison, cyberbullying, and impulsive online behaviors"

      6 replies →

    • > any possible excuse to suggest tech companies shouldn't be accountable

      The entire impetus for these bills is for Facebook (the sponsor of these bills) to escape liability for how they're currently harming kids. Facebook's only goal here is to be receiving headers that say the user is over 18, so they can continue business as usual under the assertion that any users must be adults.

      2 replies →

Because it isn't in their financial interest. They've either done nothing or actively lobbied for these ID laws. You can plausibly explain it in a number of ways, including regulatory capture, deanonymization, spam reduction, etc.

The tech companies are the ones lobbying for age verification.

The entire point of this scheme is mass surveillance and shifting responsibility away from big tech companies. It has nothing at all to do with "protecting" kids. Preventing kids from accessing adult material is not even remotely a goal, it is a pretext. Just like every other "think of the children" argument.

Because you can't have a tech company offering third party identity verification solutions if you just go with something like an RTA header.