Comment by jonplackett

7 days ago

The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content. If it was then this kind of solution would be being legislated for. It’s just about making everyone identifiable.

> The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content.

Mainstream politicians are pushing this because the public wants something done to protect their kids.

Are there likely to be bad actors pushing for it for nefarious reasons as well? Sure. Are the 'solutions' inadequate and often tech- and privacy-illiterate? Absolutely. Is the entire impulse to demand that government 'fix' this issue wrong? Maybe.

But the idea that this is all a smoke-screen from top to bottom needs to die. Not just because it's wrong, but because it's also unhelpful. If you wade into the debate saying "It's all a lie, this was never about the kids!" you're easily dismissed as a nut and an absolutist who doesn't appreciate that real people want their real kids to be protected.

  • Yep, and the tech companies had years to address these concerns and did not, so now the creaky gears of government regulation are turning. They (meaning you, the many tech company employees now outraged about this) could have headed this off years ago and provided a solution on their own terms.

  • The public largely wants whatever the media tells them to want and the media in turn tells people to want whatever the same bad actors want them to want.

  • So, why are those "real people" actually not willing to do their job? I am so pissed with parents who think the government is supposed to solve their own inability to raise a child.

    • Well for a start not all of them are very tech savvy, and we've built a world in which tech is essential to their day to day lives, including for their kids.

      If school demands the kids have a variety of devices to do their work, and they have no idea how to lock those down to exclude (for example) social media services that we know have been designed to be as addictive as possible, can you not see why they might want someone to intervene?

      (edit: Beyond that there are also tons of bad reasons, I'm not going to try and justify them. There are a lot of bad parents and just in general people who are not firing on all cylinders out there. And many of them absolutely love a government regulation to be brought in for just about anything.

      We can and should argue with these people and point out why they're wrong. But saying it's "nothing to do with actually stopping kids seeing the content" fails here too.)

      5 replies →

  • We expect every other consumer product or toy that kids are intended to use to be safe by default. Expecting parents to pre-vet the internet is like asking why they shouldn't be responsible for testing all their kids' toys for lead paint.

      Yet when it comes to internet/social media technology, it's suddenly a parenting failure if they don't pre-vet every platform and website and device before allowing their kids to use it.

      As a society, we collectively protect kids from stuff they aren't ready to handle. We don't let them gamble, or buy alcohol, cigarettes, or porn. For the most part, everyone buys in to this and parents can pretty much count on it. Are there exceptions? Sure, but they create scandals and consequences when they are discovered.

      But social media and content platforms didn't feel that they had any social obligations. They did not honor this societal convention to keep inappropriate content away from kids. And the top people at these companies don't even let their own kids use the platforms; they know how harmful they are, and they know about all the addictive hooks and dark patterns of engagement baked into them.

      1 reply →

If it is about making everyone identifiable how come California's version doesn't require providing any identifying information when setting it up on a child's device?

  • Because "making everyone identifiable" isn't an explicit design goal. Rather it is merely an implicit imperative (of Facebook et al, who are pushing these laws) that casts its shadow over the design. That shadow is what results in a design based around sending identifying information from the client to the server. Once this dynamic is normalized, servers will demand ever-more identifying information and evidence that it is correct.

    Note that this design does the exact opposite of giving parents control to protect their own children - rather, it puts the ultimate decision-making ability into the hands of corporate attorneys! For example, we can easily imagine a "Facebook4Kidz" site that does the bare legal minimum to avoid liability for addicting kids to dopamine drips, and no more. Client-side software based around RTA headers would allow parents to choose to filter things like that out, whereas when the server is making the decision it's anything goes as long as the corporate attorneys have given it the green light.

    • The California law makes it illegal for Facebook to demand anything more than just your age. That seems like the opposite of what they want.

      1 reply →

>If it was then this kind of solution would be being legislated for.

What's more likely: a global conspiracy to pass age verification so that unnamed groups can identify everyone for some unknown purpose, or politicians simply not understanding tech?

The way people pretend there can't be any organic desire for these proposals is bizarre, and it's a major reason all these proposed solutions are so technically dubious. Refusing to recognize the problem means you won't be part of solving the problem.

  • You do realize that, for whatever reason, more and more people in government positions are pursuing authoritarian agendas? It's a pretty important topic right now. All of this privacy-related stuff is happening in quick succession.

    I mean I cannot believe I have to post these, but here we go:

    https://www.politico.com/news/2025/09/13/california-advances...

    https://www.yahoo.com/news/articles/reddit-user-uncovers-beh...

    https://www.techdirt.com/tag/age-verification/

    • Your argument has two main flaws. First, it relies on an inherent connection between age verification and authoritarianism that is just taken for granted as true. Meta could easily be in favor of age verification because it reduces their liability and raises the barriers to entry for potential competitors. It doesn't inherently have to be authoritarianism.

      But more importantly even if that connection is true, your argument relies on the current proposals of age verification being the only way to satisfy the organic desire for protecting kids from the unfettered internet. OP gave an example that could be a compromise position that addresses the need and isn't authoritarian. Why can't you support that effort?

      8 replies →

  • The politicians that want to identify everyone capitalize on organic desire for these proposals in the form of fear-mongering and "Think of the children!"

    Citizens that want these laws are unthinking drones who don't want to raise their children, and instead want legislators to do it for them.

    Politicians that want these laws are the people who, ideally, want to track your every move online for a multitude of reasons, not least of which are censoring speech and controlling narratives.

    • >organic desire for these proposals

      Even if everything you said was true and there was a global conspiracy among the politicians, the tech crowd consistently denies and demeans these organic desires. We could cut the legs out from under these politicians if we listened to these people's concerns, considered actual solutions like OP did at the top of this thread, and turned these people into allies against those politicians. But instead we deny the actual desire to protect children and accuse them of either having ulterior motives or being sheep, turning them into permanent enemies thereby empowering those (hypothetically) conspiratorial politicians.

      3 replies →

Your not understanding why age verification is happening doesn't make it a conspiracy, for another reason: there is an antiregulatory crowd that will invent any possible excuse to suggest tech companies shouldn't be held accountable and that we should just leave the Internet be. Those people make a lot of money exploiting everyone, as it happens, and they also pay journalists to tell you that it's all about violating privacy or something. (The same folks will tell you that opening up Android to third-party AI tools would be a privacy and security risk, and not ask you to notice that it would just cost Google a lot of money.)

We've been running what is essentially a social experiment on our kids for the past two decades, and it has not gone well. Social media has had a toxic impact on kids. CSAM and child abuse are rampant, and "privacy services" like disposable email and VPNs are a primary vector. These are facts, whether you like them or not. There are, in fact, kids dying, school shootings, grooming, etc., all the direct result of our failure to regulate social media companies, with Section 230 being the primary problem.

OS-level age verification is likely the best route: private information can remain on a device in your control, and the browser then just needs to attest to websites whether or not the user should be allowed access, without conveying any more detail. Obviously anyone with a Linux box will have ways around it, and anything based on your own device will be exploitable in some way, but it would be generally effective for the average child.
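The shape of that attestation is easy to sketch: the birth date stays in an on-device store, and only a yes/no answer crosses the wire. This is a minimal illustration in Python; the header name `X-Age-Over-18` and the `DEVICE_PROFILE` store are hypothetical, not from any real OS or standard.

```python
from datetime import date

# Hypothetical on-device profile: the birth date never leaves the machine.
DEVICE_PROFILE = {"birth_date": date(2012, 5, 1)}

def age_attestation_header(today: date = date(2025, 9, 1)) -> dict:
    """Return the only thing the browser would send: a boolean attestation.

    The site never sees the birth date, the age, or any identity data --
    just whether the threshold is met.
    """
    bd = DEVICE_PROFILE["birth_date"]
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
    return {"X-Age-Over-18": "1" if age >= 18 else "0"}
```

A real implementation would also need the OS to sign the attestation so sites can trust it, which is where the "exploitable on your own device" caveat comes in.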

  • Any "verification" means unacceptable privacy violations.

    The best route is better parental controls, that are not enabled by default. Locking down the OS like ransomware until the user submits to age verification is the wrong approach, and what Apple did in the UK needs to be highly illegal.

    • > Any "verification" means unacceptable privacy violations.

      So I'm not necessarily arguing for age controls here, but purely on a technical level what do you think of schemes like Verifiable Credentials, which delegate verification to third parties that have already established your identity?

      In theory you can set up a system that works like this:

      1. User goes to restricted site and sets up an account

      2. Site forwards them on to a broker (a verification service) with the request "IsOver18?"

      3. User selects their bank from a dropdown on the broker site

      4. Broker forwards them to the bank, with a request "IsOver18?"

      5. User logs in and selects "Sure, prove I am over 18 to this request"

      6. Bank sends a signed response to the broker "Yep"

      7. Broker verifies and sends its own signed response to the site "Yep"

      8. The site tags the account as "Over 18 Status verified"

      In this situation, the restricted site doesn't get anything other than a boolean answer from the broker. The broker can link a request to a given bank but doesn't get anything that gives away your identity. The bank knows your identity and that it has approved a request, but not necessarily where the request came from.
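      The signature-passing part of that flow (steps 6-8) can be sketched in a few lines. This is an illustrative Python sketch only: HMAC with shared secrets stands in for the real public-key signatures a Verifiable Credentials deployment would use, and the key names and payload fields are made up for the example.

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    # HMAC stands in for a real digital signature (e.g. Ed25519).
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, payload: dict, sig: str) -> bool:
    return hmac.compare_digest(sign(key, payload), sig)

BANK_KEY = b"bank-secret"      # known to bank and broker only
BROKER_KEY = b"broker-secret"  # known to broker and site only

# Step 6: the bank answers the broker's "IsOver18?" request.
bank_claim = {"claim": "IsOver18", "answer": True}
bank_sig = sign(BANK_KEY, bank_claim)

# Step 7: the broker checks the bank's signature, then issues its own.
assert verify(BANK_KEY, bank_claim, bank_sig)
broker_claim = {"claim": "IsOver18", "answer": True}
broker_sig = sign(BROKER_KEY, broker_claim)

# Step 8: the site verifies the broker's answer and stores only a boolean --
# no name, no date of birth, no bank identity.
assert verify(BROKER_KEY, broker_claim, broker_sig)
site_record = {"over_18_verified": broker_claim["answer"]}
```

      The privacy property is visible in the last step: the site ends up holding a single boolean, while the bank never learns which site originated the request.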

      12 replies →

  • > These are facts, whether you like them or not.

    [Citation Needed] As I understand it, the debate on whether social media is responsible for actual harms in kids is still open and ongoing. Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms [0]. Scientists are hoping to get some verification from the actual social experiments that we're conducting in the UK and Australia on this.

    Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.

    [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC12165459/ "Social media and technological advancements’ impact on adolescent mental health is complex. It can be both a risk factor and a valuable support system. Excessive and problematic use has been linked to increased rates of MDD, anxiety, and mood dysregulation, while also exacerbating symptoms of ADHD, bipolar disorder, and BDD. Simultaneously, digital platforms provide opportunities for social connection, peer support, and mental health management, particularly for individuals with ASD and those seeking online mental health communities. The challenge is finding a balance. Although social media offers benefits, it also poses risks like addiction, negative social comparison, cyberbullying, and impulsive online behaviors"

    • > Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms

      Indeed. For example, the strongest evidence for harm shows that negative mental health is correlated with increasing social media use, but it's an important question of whether using social media more causes mental health problems, or mental health problems mean more social media use (or both, which would suggest a spiraling effect is important to look out for and prevent).

    • > Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.

      This is entirely false scare-tactic nonsense, and you really need to look at where you sourced that idea and stop using them as a reference point. There isn't even a concept of a method for doing this that would make it true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, which should give you some idea of where the decision-making is supposed to sit.

      Not only are parental controls woefully bad, but in the name of privacy, modern platforms make them exceptionally hard to implement at all. What is being pushed here is largely a mandate that a system for parents to control what their kids can reach must exist, and that Internet companies must support it.

      (Steam is, FWIW, probably one of the best actors in this regard already; Steam Family is incredibly nuanced in the features and tools it gives parents. I have a lot of gripes about Steam, but this is not an area where they will have difficulty complying with the law. Heck, Steam is better at parental controls than Nintendo and Disney.)

      4 replies →

  • > any possible excuse to suggest tech companies shouldn't be accountable

    The entire impetus for these bills is for Facebook (the sponsor of these bills) to escape liability for how they're currently harming kids. Facebook's only goal here is to be receiving headers that say the user is over 18, so they can continue business as usual under the assertion that any users must be adults.

    • Then you recognize that the solution definitely does not require privacy invasion, since presumably Facebook does not want actual proof because they hope teenagers will get around it.

      That being said, the antiregulatory wonks are not all working for Facebook, and some are indeed manifestly just always opposed to any regulation at all no matter what harm is occurring.

      Bear in mind the alternative: things like Discord collecting personal data to do verification at the website level. A push for a simple "user is over 18" header is vastly preferable from a privacy standpoint, and it lets parents control and monitor the setting themselves.

      1 reply →