Comment by Bender

7 days ago

The one and only method I will participate in is server operators setting an RTA header [1] for URLs that may contain adult, user-generated, or user-contributed content, and clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements, as they are already doing. RTA headers are not perfect (nothing is, nor ever will be), but there is absolutely no tracking or leaking of data involved. Governments could easily hire contractors to scan sites for the lack of that header and fine sites not participating into oblivion.

I, a small server operator and a client of the internet, will not participate in any other methods, period, full stop. Make simple, logical, and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header. It is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed, but one day it may come to that, at which point it's back into semi-private Tinc open source VPN meshes for my friends and me.

[1] - https://news.ycombinator.com/item?id=46152074
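
For illustration, a minimal sketch of the server side in Python. The stdlib http.server is purely illustrative; a real deployment would add the same static header with nginx's add_header, an Apache Header directive, or a CDN rule. The port and page body are placeholders, and the "Rating" header value is the fixed RTA label discussed in this thread.

  # Minimal sketch: attach the RTA label to every response.
  from http.server import BaseHTTPRequestHandler, HTTPServer

  RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the fixed RTA label

  class LabeledHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          body = b"<html><body>user-contributed content</body></html>"
          self.send_response(200)
          self.send_header("Rating", RTA_LABEL)  # the RTA self-label header
          self.send_header("Content-Type", "text/html")
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8080), LabeledHandler).serve_forever()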

This is exactly the way it should be done. A device with parental controls enabled blocks the content client-side when the header is detected. As far as I can tell, it's a global optimum, all trade-offs considered.

  • Well why haven't all the big tech companies done it then?

    They have only themselves to blame. They had years to fix the problem of inappropriate content being delivered to kids, and their response was sticking their fingers in their ears and saying "blah blah blah parenting blah blah blah".

    And it really should be the opposite. Assume content is not kid-safe by default, and allow sites to declare if they have some other rating.

    • The reason is that this whole push for age verification has nothing to do with actually stopping kids from seeing the content. If it did, then this kind of solution is what would be legislated. It’s just about making everyone identifiable.

      59 replies →

    • Because it isn't in their financial interest. They've either done nothing or actively lobbied for these ID laws. You can plausibly explain it in a number of ways, including regulatory capture, deanonymization, spam reduction, etc.

    • The tech companies are the ones lobbying for age verification.

      The entire point of this scheme is mass surveillance and shifting responsibility away from big tech companies. It has nothing at all to do with "protecting" kids. Preventing kids from accessing adult material is not even remotely a goal; it is a pretext. Just like every other "think of the children" argument.

    • Because you can't have a tech company offering third-party identity verification solutions if you just go with something like an RTA header.

Or you could have a header saying this is not adult-only content, and a parentally-controlled device will block things that don't participate.

  • That's a good idea. There could be two headers: the existing RTA header that adult sites use today [1], and another static header that explicitly states there shall be no adult content.

    [1] - https://www.shodan.io/search?query=RTA-5042-1996-1400-1577-R... [THESE ARE ADULT SITES, NSFW]

    • What is adult content? I know parents who have no problem with their kids seeing porn. I know parents who give their kids a beer. I know parents who take their kids to violent movies. I used to know parents who would give their kids cigarettes. Most parents I know will disagree with their kids doing at least one of the above. I know songs that were played on the radio in 1960 that would not be allowed today, even though today we allow some swearing on the radio.

      33 replies →

  • Yes, the RTA header was primarily a solution specific to porn sites. The broader problem is that parental controls don't have reliable, standardized signals to filter on, which has led to the current nonfunctional mess.

    So ideally you want a standardized header that can be used to self-classify content into any number of arbitrary and potentially overlapping categories. The presence of that header should then be legally mandated, with specific categories required to be marked as either present or absent.

    So for example HN might be "user generated T, social media T, porn F" or similar, with operators being free to include arbitrary additional categories (but we know from experience that most of them won't).
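
    For illustration only, a sketch of how a client could parse such a header. The header name "Content-Rating" and the T/F syntax are assumptions invented for this example; no such standard exists today.

      # Hypothetical self-classification header parser.
      # "Content-Rating" and the "category=T/F" syntax are assumptions,
      # not an existing standard.
      def parse_content_rating(value: str) -> dict[str, bool]:
          """Parse 'user-generated=T; social-media=T; porn=F' into a dict."""
          ratings = {}
          for part in value.split(";"):
              part = part.strip()
              if not part:
                  continue
              category, _, flag = part.partition("=")
              ratings[category.strip().lower()] = flag.strip().upper() == "T"
          return ratings

      # The example from above: how HN might label itself under this scheme.
      assert parse_content_rating("user-generated=T; social-media=T; porn=F") == {
          "user-generated": True,
          "social-media": True,
          "porn": False,
      }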

    While this would be required by law, I imagine browser vendors might also refuse to load sites that don't send the header, in order to coerce global compliance.

    • Just an opinion, which I know is not super valuable, but categories won't help with most sites. Anything that permits user-contributed content can become any rating at any minute unless all content requires approval by a moderator before anyone can see it. A few forums support that concept, but it requires a proportionate number of moderators, or I suppose a very accurate and reliable AI moderator, if that is even a thing. I think it's easier and probably legally safer to just tag anything that is not guaranteed to be 100% child-safe at all times as adult, and let parents decide if they wish to approve-list the site in parental controls.

  • I always love seeing pros and cons of whitelist vs blacklist sorts of strategies in different scenarios.

    • Yeah, and this is a good one. Blacklist is less likely to be ignored by parents. Both have risks of corps doing CYA strats, but less so with the blacklist. Whitelist has the advantage of being more feasible without an actual law, and also better matching how parenting works. Generally kids are given whitelists irl.

An outstanding idea. Those lobbying for age verification hate it though, because they want to be the arbiters of age, and all that juicy PII that they can analyze and resell.

  • I'm not so sure. I think the push is actually from the government. But companies are not exactly opposed to it. Quite the contrary. Big corporations see compliance as a moat. Tobacco companies supported stricter regulations on tobacco advertisements because they had the deep pockets required to follow the changing laws. Mr. Altman is all-in on AI regulation because it will mire down competitors while OpenAI has already "slipped past the wire" and done all their training pre-crackdown. When given a choice between regulating their own industry (platforms and operating systems) and regulating someone else's (porn sites and the like), they'll always helpfully "volunteer" to be the first to be regulated. It's just good business.

    • "The government" is the same as those lobbying the government. The people in the government get paid to push it, so they push it, and get paid more when it goes through, by the people who want that PII to analyze.

  • What PII? They get a boolean "old enough".

    • Think about how they validate how old you are. Meta and Google, who are lobbying in support of this legislation, will force you to sign up with your real ID and be the arbiter for questions like “are you old enough for this website”. For every request that you make through some third-party website that needs to know your age, Meta and Google will know where you tried to log in, and for which content. They will then resell this data to the highest bidder. Additionally, through all their ad networks and tracking, they will follow your session and have a verified ID to match your entire browsing history. This is the end of anonymity and privacy on the Internet.

      1 reply →

    • Age verification companies literally require your personal information to function. They don't want you to be able to send them a simple boolean over Tor in exchange for whatever trackable token you need to access something.

    • If technically competent people specify and build this system, sure. But it’ll be specified by complete idiot politicians, influenced by Google and Meta, who 100% DO want to know your government name, DOB, etc., so we’ll end up flashing our IDs at the camera, turning around to be scanned, etc. The platform owners will tell us they “deeply care about our privacy.”

“solutions” like this presume that age verification/gating is the goal. it’s not. it’s a cover story.

the goal is eradicating anonymous publishing. the goal is making strong government ID mandatory to use the internet.

any privacy preserving age gating system is useless toward that goal, so it is irrelevant.

Interesting, I've never heard of this. I see an example that involves an HTTP response header "Rating: RTA-5042-1996-1400-1577-RTA". But does this actually still get used by parental controls? I didn't find much documentation about it, including on the very badly designed RTA web site https://www.rtalabel.org/

For anyone curious about the value: the numbering is just a fixed string everybody decided to use, for reasons that aren't clear to me.

I would deeply prefer to do it this way, but my goodness, the RTA org needs a serious brush-up of their web site and of the information on how to use this.
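
For anyone wanting to experiment, a minimal client-side sketch in Python: fetch only the response headers and check for the label before deciding whether to render. Real parental controls would hook into the browser or OS; urllib and the helper names here are just for illustration, and some servers may not answer HEAD requests.

  # Sketch: detect the RTA label from response headers (Python stdlib).
  import urllib.request

  RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

  def is_rta_labeled(url: str) -> bool:
      # HEAD keeps the probe cheap; a real filter would inspect the headers
      # of the GET response it was already making.
      req = urllib.request.Request(url, method="HEAD")
      with urllib.request.urlopen(req) as resp:
          return resp.headers.get("Rating", "") == RTA_LABEL

  def should_block(url: str, parental_controls_enabled: bool) -> bool:
      # Block only when the device owner opted in AND the site self-labels.
      return parental_controls_enabled and is_rta_labeled(url)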

  • > But does this actually still get used by parental controls?

    Some parental control applications will look for it, but checking for it is not yet legally mandatory for the majority of user-agents.

    All I am suggesting is that we legislate the header to be added to URLs that may contain material not appropriate for small children, and mandate that the majority of user-agents, meaning the ones installed by default on tablets and operating systems, look for said header to trigger optional parental controls. Child accounts created by parents on the device should not be able to install alternate user-agents or bypass the controls (at least not easily). Parents should be guided through this on device setup.

    Indeed, their site is old and rarely touched. The ideas and concepts have not changed. It really could just be a static text site formatted in ways that lawmakers are used to, or someone could modernize it.

Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header, which parents/employers/etc. could use to block the site if they wished. People said it would never work, because adult sites had a financial incentive not to opt in and reduce their own traffic.

  • What, in the same way movie studios wouldn't comply with the Hays Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.

  • You’d think that one could simply block sites that don’t have the age header set on children’s computers. This may block kids from hobbyist sites that don’t bother to set their headers as kid-friendly, but commercial sites would surely set their headers properly. Over time, sending proper rating headers would become more normalized if they were in common use.

    This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
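
    As a sketch of that default-deny idea (assuming the "Rating" header name used by RTA, plus an invented "kid-safe" value that is not part of any standard):

      # Default-deny filter sketch: allow a page on a child's device only
      # when it carries an explicit kid-safe label. Unrated hobbyist sites
      # get blocked too, which is the trade-off described above.
      import urllib.request

      def fetch_rating(url: str) -> str | None:
          req = urllib.request.Request(url, method="HEAD")
          with urllib.request.urlopen(req) as resp:
              return resp.headers.get("Rating")

      def allowed_for_child(url: str) -> bool:
          return fetch_rating(url) == "kid-safe"  # hypothetical label value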

    • An age header is not the answer. Why should a site have to decide what content is appropriate for an 18-year-old and what content is not? Who is qualified to make that decision for every 17-year-old in the world? Do they know my 17-year-old? Do they know the rules in our home? What if I'm OK with my kid seeing sex-education stuff, but some lawyer at Wikipedia just decides to tag sex-ed articles as 18+? Now I have a shitty choice: open up the floodgates of "18+" to my kid, do it temporarily while the kid browses the sex-ed sites, or not let the kid browse them.

      Letting a company or government decide what's appropriate for what exact specific age is fraught with problems.

      2 replies →

    • Yes, that's how parental filters already work. They use a combination of RTA tags and external data to block pages. It even works with Google SafeSearch, firewall devices, etc. The RTA ecosystem is already built out and viable.

      1 reply →

  • What I am suggesting could address most of that. If they do not participate, they get fined. The government loves to fine companies. This assumes they put enough "teeth" into a law to prevent companies from accepting fines as a cost of doing business. This would also require legislation that could block sites operating from countries that do not cooperate with US laws: mandatory subscriptions to BGP AS-path filters, CDN block-lists which already exist, etc. People could still bypass such restrictions with a VPN, but that would not apply to most small children. Sanctions and embargoes are always an option.

    • > fined

      Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.

      I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).

      I’m here to die on this hill!

  • > Back in the late 90s or so, there was a proposal

    This one: https://www.w3.org/PICS/

    • PICS was very complicated and attempted to cover all possible "categories" of adult content. It was confusing, incomplete, and only a handful of sites voluntarily labelled themselves with it. RTA is one simple static header that any site operator can add in seconds, unless they want to get more granular by adding it dynamically to individual videos (say, on YouTube), in which case the server application would need to send the header for any video tagged as adult.

      I added PICS to my forums, but it was missing many categories of adult content. I ended up just selecting everything, as I could not predict what people might upload, which made for a very long header.

      2 replies →

  • People were wrong.

    We pay money online mostly through credit cards. Credit card transactions can be reversed. If children spend money on porn, those payments are likely to be reversed. This is really bad for the ability of the porn sites to continue receiving credit card payments, and continue making money.

    An age header is a trivial step that can reduce the odds of the adult site receiving payments that later get reversed. Win-win.

    But if someone is willing and able to pay, then the adult industry wants the choice of whether to access content to be up to them. If the government tries to regulate them, they'll engage in malicious compliance: do the minimum to avoid being sued, in a way that lets them still reach customers.

    For example, Utah tried to institute age verification. The porn industry blocked all IP addresses from Utah. Business boomed for VPN companies in Utah. Everyone, including the porn companies, knows that a lot of that is for porn. But if you show up with a Nevada IP address, the porn site's position is, "You're in Nevada. Utah law doesn't apply." Even if the credit card has a Utah zip code.

    If you live in Utah, and you're able to purchase a VPN, the porn companies want your money.

    • > But if someone is willing and able to pay

      If someone is willing and able to pay, they have a source of money. If they aren't allowed to buy something, that control should be applied at the level where they get the money. If the child is using an adult's credit card, responsibility lies with the adult. If children need to have their own credit cards, the obvious point of control is the credit card itself.

      But also, most porn is ad-supported, pirated or free. Directly paid content is a small fraction. So all of this is moot for porn.

    • There's an anecdote about an attempt to ban porn in Utah, which cited a survey which found that most people were opposed to adult content. The defense argued that most people will oppose porn when asked in public in order to appear moral, even if privately they are avid consumers.

      As proof, they provided records of cable TV pay-per-view purchases in Utah. The defense won.

    • There was a random comment here on HN a few days back saying that adult content has lower chargeback rates than everything else.

      So I guess stop spreading hallucinatory misinformation?

      2 replies →

Yeah, this seems like the best tradeoff. You avoid the central control infrastructure and you provide information to clients. It's also a great match with free computing devices, which can then utilize the new information, empowering users (e.g. parents enabling parental controls on a device, or individuals who want to skip some kinds of content).

There are issues today with this approach, such as the lack of granular information for sites that host many kinds of content, but if you stop investing in the central control infra and invest in this instead, that could be remedied.

This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.

  • I could be misunderstanding the context, but to me that sounds like a moderation issue, assuming we even want small children on social media in the first place. There should probably be a dedicated child-safe social media site that limits what communication can take place for small children and has severe punishments for adults pretending to be children for the purposes of grooming.

    • Moderation is like law enforcement: it doesn't prevent crimes from happening, it just punishes the people they can catch. There exist severe punishments for the kinds of behavior I'm talking about, but unsurprisingly, this does not stop kids from being harmed, and it doesn't undo it.

      This isn't hypothetical, by the way. There are adults catfishing kids into producing CSAM [0], kidnapping and assaulting minors [1], [2], and in the most extreme case, there's a borderline cult of crazy young adults who terrorize people for fun [3].

      It is a constant game of whack-a-mole for moderators/admins to keep this behavior out of online spaces where kids hang out.

      I recognize that this is a "think of the children" argument, but indeed that's the point. The anonymous web was created without thinking about the children, just like how all social media was created without thinking about how it could be used to harm people. Age verification is the smallest step towards mitigating that harm.

      Now I disagree very strongly with the laws proposed (and indeed, I've been writing/calling/talking with state reps about this locally, because I don't want my state's bill passed). But the technical challenge needs to address the real problems that legislators are trying to go after.

      [0] https://www.justice.gov/usao-wdnc/pr/discord-user-who-catfis...

      [1] https://www.nbcnews.com/news/us-news/kidnapping-roblox-rcna2...

      [2] https://www.nbcmiami.com/news/local/nebraska-man-charged-wit...

      [3] https://www.fbi.gov/contact-us/field-offices/boston/news/ope...

      6 replies →

  • This is assuming children should be on social media at all, which I for one would debate.

Servers can then infer users' ages by whether or not the client renders pages carrying those headers, no? Just see whether secondary page requests (e.g. images, scripts) are made from a client. A bad actor could use this to glean age information from the client and see whether the person viewing the page is a small child. That should be scary.
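
To make the concern concrete, a sketch in Python (with hypothetical paths and port): the page itself carries the RTA label and references an image, so a filtered client never fetches the image while an unfiltered one does, and the server learns which kind of device it is talking to.

  # Sketch of the inference risk: the follow-up request for /probe.png only
  # arrives from clients that rendered the labeled page, i.e. devices where
  # parental controls are likely off.
  from http.server import BaseHTTPRequestHandler, HTTPServer

  RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

  class ProbeHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path == "/":
              body = b'<html><body><img src="/probe.png"></body></html>'
              self.send_response(200)
              self.send_header("Rating", RTA_LABEL)  # labeled page
              self.send_header("Content-Type", "text/html")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)
          elif self.path == "/probe.png":
              print(f"{self.client_address[0]}: parental controls likely off")
              self.send_response(204)  # empty probe response is enough
              self.end_headers()

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8080), ProbeHandler).serve_forever()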

  • I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device. Some parents have assessed the situation and trust their children to be psychologically ready for adult situations. The client could be literally any age.

    Today, devices do not default to accounts being child accounts. Some day this may change and may require an initial administrator password or something to that effect, but this can evolve over time.

    • > I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device.

      Not being able to detect all children doesn't mean that being able to detect 80% of them is somehow less disturbing.

      2 replies →

  • That's true. But leaking an age threshold is not the same as private companies being able to link all your online activities to a single legal person.

  • Adults could also use this to filter out unwanted content without needing to rely on outdated filter lists.

How would this work with sites like YouTube, which allow sharing of content that is potentially not appropriate for children but is generated by the site's users? Who would be fined for "violations"? And how would such a fine be levied, especially internationally?

  • I think that initially the onus would be on YouTube to figure this out. They have some very intelligent engineers. For example, if the YouTube client is receiving affiliate funds, then they are easy to identify and fine. If they are random people, then YouTube would have to share the violation data with the other countries, and the US or UK would have to pressure those countries to participate in fining the end user. There could be financial incentives for the foreign country to participate. They can also just force-label a video as adult, as they do today when enough people report it, which is admittedly not uniformly applied.

  • This has already been solved. YouTube disables viewing via embeds for any content that has been age-restricted. Either you view it on YouTube, which requires logging in to see age-restricted content in the first place, or you get the ! icon and the warning about needing to log in.

I agree with the general idea, but I would like this header to be more fine-grained than just a binary "adult" or not. For example, so that you can distinguish content that is age-appropriate for teenagers and older from content that is suitable for all ages.

How are they supposed to fine sites outside of their jurisdiction?

  • One possible method is [1], though I am sure the network and security engineers here on HN could come up with simpler methods. Just blocking domains on the popular CDNs would kill access for most people, as by default most browsers use them for DoH DNS.

    [1] - https://news.ycombinator.com/item?id=47950843

    • The question was about fining entities outside of the original jurisdiction, so I am not sure what you have in mind that could be done by network/security engineers here.

      5 replies →

The header should be the other way around. It should declare that your site will not contain adult material. The local government should then scan the sites participating.

Anyway, yes, that would just solve the problem and not destroy anything. What is the reason nobody is talking about it?

> I, a small server operator and a client of the internet, will not participate in any other methods, period, full stop.

You will, however, follow the law if it mandates that you do otherwise.

Which is why "age verification" should be stopped before it's too late.

  • I have probably never met anyone who is not committing at least three (3) felonies per day. That is at least how legal theory is applied. It's a fun topic to research. As a side note, it would be interesting to see how far down the totem pole they venture in terms of verifying which sites are using age/ID verification and tracking.

> fine sites not participating into oblivion.

That would also amount to compelled speech.

  • > That would also amount to compelled speech.

    I disagree. The legal requirement to apply a warning label is a well-known, understood, and accepted process that is applied to a myriad of hazards for children and adults. As just one example, businesses in some states, most notably California, are compelled to add warning labels to foods and other products that could cause cancer.

    • That's not the best example, since the levels set for Prop 65 warnings are so low that the warnings are effectively useless; every single commercial building in CA now somehow causes cancer.

      3 replies →

  • Clients could refuse to show content that does not have headers set.

    On the other hand, servers might choose to lie. After all, that is their free speech right.

    So maybe you need some third-party vetting list. Of course, that one should be fully liable for any damages misclassification can cause... But someone would step up.

If they can scrape and fine, they can just make a list and the browser can use that.