Comment by ohhnoodont

1 day ago

I agree. I feel the flow of having browsers send some flag to sites is the most privacy-preserving approach to this whole topic. The system owner creates a “child” account that has the flag set by the OS and prevents the execution of unsanctioned software.

This puts the responsibility back on parents to do the bare minimum required in moderating their child’s activities.

What would be even more privacy-preserving would be to mandate that sites send age-appropriateness headers (mainstream porn sites already do this voluntarily).

Possibly it could be further mandated that the OS collect relevant rating information for each account and provide APIs with which browsers and other software could implement filtering.

And possibly it could be further mandated that web browsers adopt support for this filtering standard.

And if you want a really crazy idea you could pass a law mandating that parents configure parental controls on devices of children under (say) 12 and attach civil penalties for repeated failure to do so.

There's never any need for information about the user to be sent off to third parties, nor should we adopt schemes that will inevitably provide ammo for those advocating attested digital platforms.

  • Yes, this is a really simple fix. The first line of your post says it all. If they really wanted to protect children, they would put the responsibility on the services at the other end. This is about mass surveillance, or about disadvantaging open-source solutions.

  • I think you would find widespread support from the various websites out there for this. Most porn websites today voluntarily implement some type of mechanism that advertises them as not for children.

    • The issue is: how does the browser know the age bracket of the user in question, so that it knows not to load content carrying those headers? The API this bill mandates is the missing half that would make those headers actually useful without specialized browsers or third-party plugins.

  • So does Google send a header for each search result when you look up "Ron Jeremy" so that some results get hidden, or does the browser just block the whole page?

    Sending all the "bad" data to the client and hoping the client does the right thing puts a lot of complexity on the client. It's a lot easier to know things are working if the bad data never gets sent to the client - it can't display what it didn't get.

    • Google would send a header that it is appropriate for all ages (I'm not sure how the safe search toggle would interact with this, the idea is just a rough sketch after all).

      When you click on a search result, you load a new page on a different website. The new page would once again come with a header indicating the content rating. This header would be attached to all pages by law. It would be sent every time you load any page.

      Assuming the actual problem here is the difficulty of implementing reliable content filtering (à la parental controls), then the minimally invasive solution is to institute an open standard that lets any piece of software easily implement the desired functionality. You can then further pass legislation requiring (for example) that certain classes of website (e.g. social media) include an indication of this as part of the header.

      Concretely, an example header might look like "X-Content-Filter: 13,social-media". If all websites were legally mandated to send such a header, it would become trivially easy to implement filtering on device, since you could simply block any site that failed to send it.

      > A lot easier to know things are working if ...

      Which is followed by wanting an attested OS (to make sure the value is reliably reported), followed by a process for a third party to verify a government issued ID (since the user might have lied), followed by ...

      It's entirely the wrong mentality. It isn't necessary for solving the actual problem, it mandates the leaking of personal data, and it opens an entire can of worms regarding verification of reported fact.
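      The on-device filtering described above can be sketched in a few lines. Everything here is hypothetical: the "X-Content-Filter" header, its "<min-age>,<category>" format, and the fail-closed treatment of sites that omit it are the comment's rough proposal, not any real standard.

```python
# Sketch of client-side filtering against a hypothetical
# "X-Content-Filter: <min-age>,<category>[,<category>...]" header.
# Header name and format are illustrative only.

def should_block(headers: dict, user_age: int, blocked_categories: set) -> bool:
    """Return True if this page should be blocked for the current account."""
    value = headers.get("X-Content-Filter")
    if value is None:
        # Fail closed: a site that sends no rating header gets blocked,
        # which is what makes a legal mandate enforceable on device.
        return True
    min_age_str, _, categories = value.partition(",")
    try:
        min_age = int(min_age_str)
    except ValueError:
        return True  # malformed rating: fail closed
    if user_age < min_age:
        return True
    cats = {c.strip() for c in categories.split(",") if c.strip()}
    return bool(cats & blocked_categories)

# Example: a 10-year-old account with social media blocked.
print(should_block({"X-Content-Filter": "13,social-media"}, 10, {"social-media"}))  # True
print(should_block({"X-Content-Filter": "0"}, 10, {"social-media"}))                # False
```

      Note the filtering logic lives entirely on the client; the server learns nothing about the user.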

If browsers are going to send flags, they should only send a flag if the user is a minor. Otherwise it's another piece of tracking data that can be used for fingerprinting.

  • If you send a flag ever, then absence of a flag is also fingerprinting surface.

    If you imagine a world with a header, Accepts-Adult-Content, that takes a boolean value, you essentially have three possibilities: ?0, ?1, and absent.

    How useful a tracking signal those three options provide depends on what else is being sent.

    For example, if someone is stuffing a huge amount of fingerprinting data into the User-Agent string, then this header probably doesn't change the privacy posture at all.

    As another example, if you're in a regular browser with much of the UA string frozen, and ignoring all other headers for now, then it depends on how likely users with that UA string are to have each option: if all users of that browser always send ?0 (when they indicate they are a minor) or ?1 (when they indicate they are an adult, or decline to indicate anything), then a request with that UA and the header absent is significantly more noteworthy, because that browser would never omit it, and more likely to be meaningful fingerprinting surface.

    That said, adding any of this as passive fingerprinting surface seems unlikely to be worthwhile.

    If you want even a weak signal, it would be much better to require user interaction for it.
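    To make the three-state point concrete, here's a toy sketch. The Accepts-Adult-Content name and the ?0/?1 values (RFC 8941 structured-field boolean syntax) come from the hypothetical above; no real browser sends this header.

```python
# Toy illustration: whichever policy you pick ("everyone sends" vs.
# "only minors send"), a server observing the header can still
# partition users into three distinguishable buckets.

def bucket(headers: dict) -> str:
    value = headers.get("Accepts-Adult-Content")
    if value is None:
        return "absent"    # never sent: old browser, stripped by proxy, etc.
    if value == "?0":
        return "no-adult"  # RFC 8941 boolean false
    return "adult-ok"      # "?1": adult, or declined to say

requests = [{}, {"Accepts-Adult-Content": "?0"}, {"Accepts-Adult-Content": "?1"}]
print([bucket(h) for h in requests])  # ['absent', 'no-adult', 'adult-ok']
```

    Sending the flag only for minors doesn't collapse the buckets; it just changes which population each bucket identifies.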