Comment by heavyset_go

1 day ago

> Having said that, open-source zero-knowledge proofs are infinitely less evil (I refuse to say "better") than commercial cloud-based age monitoring baked into every OS

To be honest, I worry that the framing of this legislation and ZKP generally presents a false dichotomy, where second-option bias[1] prevails because of the draconian first option.

There's always another option: don't implement age verification laws at all.

App and website developers shouldn't be burdened with extra costly liability to make sure someone's kids don't read a curse word; parents can use the plethora of parental controls on the market if they're that worried.

[1] https://rationalwiki.org/wiki/Appeal_to_the_minority#Second-...

> App and website developers shouldn't be burdened with extra costly liability

Why not? Physical businesses have liability if they provide age restricted items to children. As far as I know, strip clubs are liable for who enters. Selling alcohol to a child carries personal criminal liability for store clerks. Assuming society decides to restrict something from children, why should online businesses be exempt?

On who should be responsible, parents or businesses, historically the answer has been both. Parents have decision making authority. Businesses must not undermine that by providing service to minors.

  • > Why not?

    This implies the creation of an infrastructure for the total surveillance of citizens, unlike age verification by physical businesses.

    • Spell it out: how do ID checks for specific services (where the laws I've read all require no records be retained with generally steep penalties) create an infrastructure for total surveillance? Can't sites just not keep records like they do in person and like the law mandates? Can't in-person businesses keep records and share that with whomever you're worried about?

      How do you reconcile porn sites as a line in the sand with things like banking or online real estate transactions or applying for an apartment already performing ID checks? The verification infrastructure is already in place. It's mundane. In fact the apartment one is probably more offensive because they'll likely make you do their online thing even if you could just walk in and show ID.


  • > Physical businesses have liability if they provide age restricted items to children.

    Ok, suppose the strip club is the website, and the club's door is the OS.

    Would you fine the door's manufacturer for teens getting into the strip club?

    • Dueling physical analogies is never a productive way to resolve a conversation like this. It just diverts all useful energy into arguing about which analogy is more accurate but it doesn't matter because the people pushing this law don't care about any of them and aren't going to stop even if the entire internet manages to agree about an analogy. This needs to be fought directly.


    • The OS is not the club's door. The OS is unrelated. The strip club needs to hire someone to work their door and check ID, not point at an unrelated third party. They should have liability to do so as the service provider.

  • > Physical businesses have liability if they provide age restricted items to children.

    These are often clear cut. They're physical controlled items. Tobacco, alcohol, guns, physical porn, and sometimes things like spray paint.

    The internet is not. There are people who believe discussions about human sexuality (ie "how do I know if I'm gay?") should be age restricted. There are people who believe any discussion about the human form should be age restricted. What about discussions of other forms of government? Plenty would prefer their children not be able to learn about communism from anywhere other than the Victims of Communism Memorial Foundation.

    The landscape of age restricting information is infinitely more complex than age restricting physical items. This complexity enables certain actors to censor wide swaths of information due to a provider's fear of liability.

    This is closer to a law that says "if a store sells an item that is used to damage property whatsoever, they are liable", so now the store owner must fear that a full can of soda could be used to break a window.

    • That's not a problem of age verification. That's a problem of what qualifies for liability and what is protected speech, and the same questions do exist in physical space (e.g. Barnes and Noble carrying books with adult themes/language).

      So again, assuming we have decided to restrict something (and there are clear lines online too like commercial porn sites, or sites that sell alcohol (which already comes with an ID check!)), why isn't liability for online providers the obvious conclusion?


    • >Plenty would prefer their children not be able to learn about communism

      Plenty of people would prefer that children not learn about scientology from pro-scientology cultists too. It's not that they can't know about scientology (they probably should, in fact, because knowledge can have an immunizing effect against cults)...

      And it's not that they can't know about communism (they probably should, in fact, because knowledge can have an immunizing effect against cults)...


  • > Physical businesses

    Physical businesses nominally aren't selling their items to people across state or country borders.

    Of course, we threw that out when we decided people could buy things online. How'd that tax loophole turn out?

    • But when they do, federal law requires age verification (at least with e.g. alcohol).

      It turned out we pretty much closed the tax loophole. I don't remember an online purchase with no sales tax since the mid 00s.

  • For one thing, it's fairly uncommon for children to purchase operating systems. As long as there is one major operating system with age verification, parents (or teachers) who want software restrictions on their children can simply provide that one. The existence of operating systems without age verification does not actually create a problem as long as the parents are at least somewhat aware of what is installed at device level on their child's computer, which is an awful lot easier than policing every single webpage the kid visits.

    • So I agree that operating systems and device developers should not be liable. That's putting a burden on an unrelated party and a bad solution that could well lead to locked-down computing. I meant that liability should lie with service providers, e.g. porn distributors: the people actually dealing in the restricted item. As a rule of thumb, we shouldn't make their externalities other people's problems (assuming we agree that their product being given to children is a problematic externality).

> App and website developers shouldn't be burdened with extra costly liability to make sure someone's kids don't read a curse word, parents can use the plethora of parental controls on the market if they're that worried.

App and website operators should add one static header. [1] That's it, nothing more. Site operators could do this in their sleep.

User-agents must look for said header [1] and activate parental controls if they were enabled on the device by a parent. That's it, nothing more. No signalling to a website, no leaking data, no tracking, no identifying. A junior developer could do this in their sleep.
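The mechanism above really is that small. A minimal sketch in Python, where the header name `Content-Rating` and the value `adult` are hypothetical placeholders (not from any existing standard), and the decision happens entirely on the device, so nothing is signalled back to the site:

```python
# Sketch of the proposed header scheme. "Content-Rating" / "adult" are
# hypothetical names, not an existing standard.

def should_block(headers: dict, parental_controls_on: bool) -> bool:
    """User-agent side: block only when parental controls are enabled
    on the device AND the site declares itself adult-rated. The check
    is purely local, so there is no signal, leak, or tracking."""
    if not parental_controls_on:
        return False
    return headers.get("Content-Rating", "").lower() == "adult"

def adult_site_app(environ, start_response):
    """Site-operator side: one static header on every response
    (shown here as a bare-bones WSGI app)."""
    start_response("200 OK", [
        ("Content-Type", "text/html"),
        ("Content-Rating", "adult"),  # the single static header
    ])
    return [b"<html>...</html>"]
```

Note the asymmetry: the site's burden is one line of server configuration, while all policy decisions (and all knowledge of who is a child) stay on the parent-configured device.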

None of this will happen of course as bribery (lobbying) is involved.

[1] - https://news.ycombinator.com/item?id=46152074

Practically, instead of requiring that sites verify age, require that they serve adult content with standardized headers. Devices can then be marketed as "child-safe" which refuse to display content with such headers.

ZKP methods are just as draconian, because they rely on locking down end-user devices with remote attestation, which is why they're being pushed by Google ("Safety" net, WEI, etc).

The real answer to the problem is for websites/appstores to publish tags that are legally binding assertions of age appropriateness, and then browsers/systems can be configured to use those tags to only show appropriate content to their intended user.

This also gives parents the ability to additionally decide other types of websites are not suitable for their children, rather than trusting websites themselves to make that decision within the context of their regulatory capture. For example imagine a Facebook4Kidz website that vets posts as being age appropriate, but does nothing to alleviate the dopamine drip mechanics.

There has been a market failure here, so it wouldn't be unreasonable for legislation to dictate that large websites must implement these tags (over a certain number of users), and that popular mobile operating systems / browsers implement the parental controls functionality. But there would be no need to cover all websites and operating systems - untagged websites fail as unavailable in the kid-appropriate browsers, and parents would only give devices with parental controls enabled to their kids.
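The fail-closed behavior described above (untagged sites are simply unavailable in a kid-appropriate browser) can be sketched in a few lines. The `content-rating` meta tag name and the `all-ages` value are hypothetical; a real scheme would need a standardized vocabulary with legal definitions:

```python
# Sketch of fail-closed, tag-based filtering on the browser/device side.
# The tag name "content-rating" and value "all-ages" are hypothetical.
from html.parser import HTMLParser

class RatingParser(HTMLParser):
    """Pull the page's self-declared rating out of its meta tags."""
    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name") == "content-rating":
                self.rating = d.get("content")

def allowed_for_child(html_text: str) -> bool:
    """Kid-safe browser policy: show a page only if it carries an
    explicit all-ages tag. Untagged pages fail closed (blocked)."""
    parser = RatingParser()
    parser.feed(html_text)
    return parser.rating == "all-ages"
```

Failing closed is what makes partial coverage workable: only sites that want child audiences need to tag at all, and a mislabeled tag (a legally binding assertion) is where liability would attach.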

  • > The real answer to the problem is for websites/appstores to publish tags that are legally binding assertions of age appropriateness, and then browsers/systems can be configured to use those tags to only show appropriate content to their intended user.

    Agreed. Recycling a comment on reasons for it to be that way:

    ___________

    1. Most of the dollar costs of making it all happen will be paid by the people who actually need/use the feature.

    2. No toxic Orwellian panopticon.

    3. Key enforcement falls into a realm non-technical parents can actually observe and act upon: What device is little Timmy holding?

    4. Every site in the world will not need a monthly update to handle Elbonia's rite of manhood on the 17th lunar year to make it permitted to see bare ankles. Instead, parents of that region/religion can download their own damn plugin.

    • Good list of more reasons! I focused on what I consider the two most important.

      To expand on your #3, it also gives parents a way to have different policies on different devices for the same child. Perhaps absolutely no social media on their phone (which is always drawing them in, and can be used in private when they're supposed to be doing something else), but allowing it on a desktop computer in an observable area (i.e. accountability).

      The way the proposed legislation is made, once companies have cleared the hurdle of what the law requires, parents are then left up to the mercy of whatever the companies deem appropriate for their kids. Which isn't terribly surprising for regulatory capture legislation! But since it's branded with protecting kids and helping parents, we need to be shouting about all the ways it actually undermines those goals.

> There's always another option: don't implement age verification laws at all.

Where do you go to vote for this option?

The concern is the all-pervasive surveillance, control, and manipulation of algorithmic social media, and its objective consequences for child development and well-being. Not "kids reading a bad word". Disagree all you want, but don't twist the premise.

Surely you can find a rationalwiki article for your fallacy too.

  • If you want to avoid all pervasive surveillance, it might be wise to not mandate all pervasive surveillance in the OS by law.

    In fact, I suspect adults, and not just children, would also appreciate it if the pervasive surveillance was simply banned, instead of trying to age gate it. Why should bad actors be allowed to prey on adults?

  • You mean the same social media companies that want this legislation and wrote it themselves? The same legislation that introduces more surveillance and tracking for everyone, including kids?

    Also, I heard the same thing about video games, TV shows, D&D, texting and even youth novels. It's yet another moral panic.

    From the Guardian[1]:

    > Social media time does not increase teenagers’ mental health problems – study

    > Research finds no evidence heavier social media use or more gaming increases symptoms of anxiety or depression

    > Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study.

    > With ministers in the UK considering whether to follow Australia’s example by banning social media use for under-16s, the findings challenge concerns that long periods spent gaming or scrolling TikTok or Instagram are driving an increase in teenagers’ depression, anxiety and other mental health conditions.

    > Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties.

    From Nature[2]:

    > Time spent on social media among the least influential factors in adolescent mental health

    From the Atlantic[3] with citations in the article:

    > The Panic Over Smartphones Doesn't Help Teens. It may only make things worse.

    > I am a developmental psychologist[4], and for the past 20 years, I have worked to identify how children develop mental illnesses. Since 2008, I have studied 10-to-15-year-olds using their mobile phones, with the goal of testing how a wide range of their daily experiences, including their digital-technology use, influences their mental health. My colleagues and I have repeatedly failed to find[5] compelling support for the claim that digital-technology use is a major contributor to adolescent depression and other mental-health symptoms.

    > Many other researchers have found the same[6]. In fact, a recent[6] study and a review of research[7] on social media and depression concluded that social media is one of the least influential factors in predicting adolescents’ mental health. The most influential factors include a family history of mental disorder; early exposure to adversity, such as violence and discrimination; and school- and family-related stressors, among others. At the end of last year, the National Academies of Sciences, Engineering, and Medicine released a report[8] concluding, “Available research that links social media to health shows small effects and weak associations, which may be influenced by a combination of good and bad experiences. Contrary to the current cultural narrative that social media is universally harmful to adolescents, the reality is more complicated.”

    [1] https://www.theguardian.com/media/2026/jan/14/social-media-t...

    [2] https://www.nature.com/articles/s44220-023-00063-7

    [3] https://www.theatlantic.com/technology/archive/2024/05/candi...

    [4] https://adaptlab.org/

    [5] https://pubmed.ncbi.nlm.nih.gov/31929951/

    [6] https://www.nature.com/articles/s44220-023-00063-7#:~:text=G...

    [7] https://pubmed.ncbi.nlm.nih.gov/32734903/

    [8] https://nap.nationalacademies.org/resource/27396/Highlights_...