Comment by ocdtrekkie

7 days ago

Your not understanding why age verification is being pushed doesn't make it a conspiracy; there is another reason it gets framed that way. There is an antiregulatory crowd that will invent any possible excuse to suggest tech companies shouldn't be accountable and we should just leave the Internet be. Those people make a lot of money exploiting everyone, as it happens, and they also pay journalists to tell you that it's all about violating privacy or something. (The same folks will tell you that opening up Android to third-party AI tools would be a privacy and security risk, and not ask you to notice that it would just cost Google a lot of money.)

We've been running what is essentially a social experiment on our kids for the past two decades, and it has not gone well. Social media has had a toxic impact on kids. CSAM and child abuse are rampant, and "privacy services" like disposable email and VPNs are among the primary vectors. These are facts, whether you like them or not. There are, in fact, kids dying, school shootings, grooming, etc., all of which are the direct result of our failure to regulate social media companies, with Section 230 being the primary problem.

OS-level age verification is likely the best route: private information can remain on a device in your control, and the browser then just needs to attest to websites whether or not the user should be allowed access, without conveying any more detail. Obviously anyone with a Linux box will have ways around it, and anything based on your own device will be exploitable in some way, but it would be generally effective for the average child.

Any "verification" means unacceptable privacy violations.

The best route is better parental controls that are not enabled by default. Locking down the OS like ransomware until the user submits to age verification is the wrong approach, and what Apple did in the UK needs to be highly illegal.

  • > Any "verification" means unacceptable privacy violations.

    So I'm not necessarily arguing for age controls here, but purely on a technical level, what do you think of schemes like Verifiable Credentials, which delegate verification to third parties that have already established your identity?

    In theory you can set up a system that works like this:

    1. User goes to restricted site and sets up an account

    2. Site forwards them on to a verification service with a request "IsOver18?"

    3. User selects their bank from a dropdown on the broker site

    4. Broker forwards them to the bank, with a request "IsOver18?"

    5. User logs in and selects "Sure, prove I am over 18 to this request"

    6. Bank sends a signed response to the broker "Yep"

    7. Broker verifies and sends its own signed response to the site "Yep"

    8. The site tags the account as "Over 18 Status verified"

    In this situation, the restricted site doesn't get anything other than a boolean answer from the broker. The broker can link a request to a given bank but doesn't get anything that gives away your identity. The bank knows your identity and that it has approved a request, but not necessarily where the request came from.
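The trust chain in steps 6-8 above can be sketched in a few lines of Python. This is purely illustrative: HMACs over a shared secret stand in for the asymmetric signatures a real Verifiable Credentials deployment would use, and the claim fields and nonce are made up for the example.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical keys: in reality the bank and broker would sign with
# private keys and the relying parties would verify with public keys.
BANK_KEY = secrets.token_bytes(32)    # bank <-> broker
BROKER_KEY = secrets.token_bytes(32)  # broker <-> restricted site

def sign_claim(key: bytes, claim: dict) -> dict:
    """Attach a signature to a minimal claim (no identity inside)."""
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify_claim(key: bytes, signed: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(signed["claim"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

# Step 6: the bank answers the "IsOver18?" challenge. The signed claim
# carries only a nonce and a boolean -- no name, no account number.
nonce = secrets.token_hex(16)
bank_response = sign_claim(BANK_KEY, {"nonce": nonce, "over18": True})

# Step 7: the broker checks the bank's signature, then issues its own
# signed response; the site never sees the bank's message at all.
assert verify_claim(BANK_KEY, bank_response)
broker_response = sign_claim(
    BROKER_KEY,
    {"nonce": nonce, "over18": bank_response["claim"]["over18"]})

# Step 8: the site verifies the broker's signature and tags the account.
assert verify_claim(BROKER_KEY, broker_response)
print(broker_response["claim"]["over18"])  # True
```

The point of the sketch is that each hop only ever forwards the boolean plus a signature, which is why the site ends up knowing nothing beyond "some broker vouched for this."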

    • The verification broker tracks which sites make requests and records that, attached to personal data. The broker then sells or leaks that personal data along with a history of every age-gated site you've visited.

      Your solution also requires a bank account, which not everyone has. Many do, but not all. And while the bank may not know which site you are visiting, it does now know that you visit sites requiring age verification, and how often.

      1 reply →

    • User installs a browser extension which forwards the request to everyoneisover18.com, owner of that site has a script set up to log into their bank and pass the verification challenge

      9 replies →

> These are facts, whether you like them or not.

[Citation Needed] As I understand it, the debate on whether social media is responsible for actual harms in kids is still open and ongoing. Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms [0]. Scientists are hoping to get some verification from the actual social experiments that we're conducting in the UK and Australia on this.

Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC12165459/ "Social media and technological advancements’ impact on adolescent mental health is complex. It can be both a risk factor and a valuable support system. Excessive and problematic use has been linked to increased rates of MDD, anxiety, and mood dysregulation, while also exacerbating symptoms of ADHD, bipolar disorder, and BDD. Simultaneously, digital platforms provide opportunities for social connection, peer support, and mental health management, particularly for individuals with ASD and those seeking online mental health communities. The challenge is finding a balance. Although social media offers benefits, it also poses risks like addiction, negative social comparison, cyberbullying, and impulsive online behaviors"

  • > Social media has been found to do both harm and good for kids, and for some kids the good outweighs the harms

    Indeed. For example, the strongest evidence for harm shows that negative mental health is correlated with increasing social media use, but it's an important question of whether using social media more causes mental health problems, or mental health problems mean more social media use (or both, which would suggest a spiraling effect is important to look out for and prevent).

  • > Mandating OS-level age verification effectively means not allowing kids access to OSS platforms, a step way too far in my opinion. For instance, we would have to outlaw Steam Decks for kids.

    This is entirely false, scare-tactic nonsense, and you really need to look at where you sourced that idea and stop using them as a reference point. There isn't even a conceivable implementation that would make that true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if that gives you some idea of where the decisionmaking is supposed to sit.

    Not only are today's parental controls woefully bad, but in the name of privacy, modern platforms make it exceptionally hard to implement parental controls at all. What is being pushed here is largely a mandate that a system for parents to control what their kids can reach needs to exist and that Internet companies need to support it.

    (Steam is, FWIW, probably one of the best actors in this regard already, Steam Family is incredibly nuanced in the features and tools it gives parents. I have a lot of gripes about Steam but this is not a place they will have difficulty complying with the law. Heck, Steam is better at parental controls than Nintendo and Disney).

    • > There isn't even a concept of a method of doing this that would make that true, and certainly not in any of the implementations being considered in the US. The federal bill is called the Parents Decide Act, if it gives you some idea where the goal in decisionmaking is supposed to be.

      The Parents Decide Act (PDA) goes considerably farther than superficially similar sounding laws like California's.

      The California law requires that an OS allow the parent or guardian to associate an age or birthdate with the account when setting up a child's account on a device that will primarily be used by the child. It does not require any verification of the age information that the parent provides.

      The PDA requires that a birthdate be provided for anyone who has an account on the device, and leaves much of the detail up to the Federal Trade Commission to work out in the first 180 days after it is passed. The wording of the list of things the Commission is to do suggests that the OS is supposed to actually verify age information, rather than just accept whatever a parent enters when setting up the child's device and account, and that it must also collect and verify the parent's birthdate.

    • A Steam Deck is just an Arch Linux box. There is, intentionally and by design, no method of securing it against its user. Anyone with root access can change anything on it. There is no way to enforce an age verification scheme on it that cannot be removed or altered by a sufficiently bright and motivated teenager.

      1 reply →

    • The California bill, which is not called the Parents Decide Act, lets parents decide. The federal Parents Decide Act doesn't say whether parents can decide or not - it says a commission shall decide whether parents shall be able to decide, and we can predict what that commission will decide.

> any possible excuse to suggest tech companies shouldn't be accountable

The entire impetus for these bills is for Facebook (the sponsor of these bills) to escape liability for how they're currently harming kids. Facebook's only goal here is to be receiving headers that say the user is over 18, so they can continue business as usual under the assertion that any users must be adults.

  • Then you recognize that the solution definitely does not require privacy invasion, since presumably Facebook does not want actual proof because they hope teenagers will get around it.

    That being said, the antiregulatory wonks are not all working for Facebook, and some are indeed manifestly just always opposed to any regulation at all no matter what harm is occurring.

    Bear in mind the alternative: things like Discord collecting personal data to do verification at the website level. A simple "user is over 18" header is incredibly preferable from a privacy standpoint, and it lets parents control and monitor the setting themselves.

    • This legislation does not require it out of the gate, but it sets up the precedent and the incentives such that it will eventually be required down the line. That's the problem with anything that gives more power, and the expectation of even more power, to the server (ie to big tech).

      FWIW I personally would be supportive of legislation where the data flow went the proper way, server->client, for the user-agent to decide. Consider: any website over a certain size must publish an appropriate set of well-known tags asserting whether its content is suitable for kids of certain ages, whether it has social aspects, the type of content, etc. Any device preloaded with an operating system over a certain market share must include parental control software that uses those tags, offered as an option in the setup flow. The parental control software "fails closed" and doesn't display websites without tags. The long tail of the open web, bespoke devices, new OSes, etc. remain completely unaffected.
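The fail-closed check described above is simple enough to sketch. Everything here is hypothetical: the tag names, the hosts, and the in-memory "web" standing in for fetching each site's well-known ratings document are invented for illustration.

```python
# Hypothetical well-known content tags, as if fetched from each site's
# published ratings document. "untagged.test" publishes nothing.
WELL_KNOWN_TAGS = {
    "example-news.test":  {"min_age": 0,  "social": False},
    "example-forum.test": {"min_age": 13, "social": True},
}

def allowed(host: str, child_age: int) -> bool:
    """Client-side parental control decision: fail closed on missing tags."""
    tags = WELL_KNOWN_TAGS.get(host)
    if tags is None:
        return False  # no tags published -> block (fail closed)
    return child_age >= tags["min_age"]

print(allowed("example-news.test", 10))   # True: tagged, no age floor
print(allowed("example-forum.test", 10))  # False: under the site's min_age
print(allowed("untagged.test", 10))       # False: no tags, fails closed
```

Note the privacy property the commenter is after: the decision runs entirely in the user-agent, so no server ever learns the child's age.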