I work at a European identity wallet system that uses a zero knowledge proof age identification system. It derives an age attribute such as "over 18" from a passport or ID, without disclosing any other information such as the date of birth. As long as you trust the government that gave out the ID, you can trust the attribute, and anonymously verify somebodies age.
I think there are many pros and cons to be said about age verification, but I think this method solves most problems this article supposes, if it is combined with other common practices in the EU such as deleting inactive accounts and such. These limitations are real, but tractable. IDs can be issued to younger teenagers, wallet infrastructure matures over time, and countries without strong identity systems primarily undermine their own age bans. Jurisdictions that accept facial estimation as sufficient verification are not taking enforcement seriously in the first place. The trap described in this article is a product of the current paradigm, not an inevitability.
According to the EU Identity Wallet's documentation, the EU's planned system requires highly invasive age verification to obtain 30 single use, easily trackable tokens that expire after 3 months. It also bans jailbreaking/rooting your device, and requires GooglePlay Services/IOS equivalent be installed to "prevent tampering". You have to blindly trust that the tokens will not be tracked, which is a total no-go for privacy.
These massive privacy issues have all been raised on their Github, and the team behind the wallet have been ignoring them.
> It also bans jailbreaking/rooting your device, and requires GooglePlay Services/IOS equivalent be installed to "prevent tampering".
Regulatory capture at its finest. Such a ruling gives Apple and Google a duopoly over the market.
Maybe worse, it encourages the push of personal computers to be more mobile like (the fact that we treat phones as different from computers is already a silly concept).
So when are we going to build a new internet? Anyone playing around with things like Reticulum? LoRA? Mesh networks?
> EU's planned system requires highly invasive age verification
EUDI wallets are connected to your government issued ID. There is no "highly invasive age verification".
We are literally sending a request to our government's server to sign, with their private key, message "this john smith born on 1970-01-01 is aged over 18" + jwt iat. There are 3 claims in there. They are hashed with different salts. This all is signed by the government.
You get it with the salts. When you want to prove you are 18+ you include salt for the "is aged over 18" claim, and the signed document with all the salts and the other side can validate if the document is signed and if your claim matches the document.
No face scanning, no driver license uploading to god-knows-where, no anything.
> to obtain 30 single use, easily trackable tokens that expire after 3 months
This is the fallback mechanism. You are supposed to use bbs+ signatures that are zero knowledge, are computed on the device and so on. It is supposed to provide the "unlinkability". I don't feel competent enough to explain how those work.
> jailbreaking / "prevent tampering"
This is true. The eidas directive requires that secret material lives in a dedicated hardware / secure element. It's really not much different than what a banking app would require.
> You have to blindly trust that the tokens will not be tracked
This is not true, the law requires core apps to be opensource. Polish EUDI wallet has been even decompiled by a youtuber to compare it with sources and check if the rumors about spying are true. So you can check yourself if the app tracks you.
Also we can't have a meaningful discussion without expanding on definition of "tracking".
Can the site owner track you when you verify if you are 18+? Not really, each token is unique, there should be no correlation here.
Can the government track you? No, not alone.
Can the site owner and the government collude to track you? Yes they can! Government can track all salts for your tokens, site can collect all salts, they can compare notes. There are so called policy mitigations currently: audits and requirements for governments to remove salts from memory the moment stuff is issued.
Can they lie? Sure.
Can the site owner and the government collude to track you if you are using bbs+? No. Math says no.
The inherent problem with all zero knowledge identity solutions is that they also prevent any of the safeguards that governments want for ID checking.
A true zero knowledge ID check with blind signatures wouldn't work because it would only take a single leaked ID for everyone to authenticate their accounts with the same leaked ID. So the providers start putting in restrictions and logging and other features that defeat the zero knowledge part that everyone thought they were getting.
> It also bans jailbreaking/rooting your device, and requires GooglePlay Services/IOS equivalent be installed to "prevent tampering".
The EUDI spec is tech neutral.
What the EUDI mandates is a high level of assurance under the eIDAS 2.0 regulation and the use of a secure element or a trusted execution environment to store the key.
I feel like you're glossing over a lot of uncomfortable but important implementation details here. None of this works without effectively banning personal computing and tying the whole system to secure attestation (which in practice means non-jailbroken apple & android devices). No thanks.
Can we go back to defaulting to parenting instead of nanny-states? Maybe make "age sensitive" websites include this fact into a header (or whatever) so that parents can decide who in their household can access which content. Instead of having some overreaching corpo-government implementing draconian "verification" systems.
If I want to live under the thumb of a strongly verified "benevolent" dictatorship, I'll move to China. No need to create a second China at home.
> It derives an age attribute such as "over 18" from a passport or ID, without disclosing any other information such as the date of birth.
How? If it’s analyzes my ID 100% client side I can fake any info I want. If my ID goes to a server, it’s compromised IMO.
I think the zero proof systems being touted are like ephemeral messaging in Snapchat. That is, we’re being sold something that’s impossible and it only “works” because most people don’t understand enough to know it’s an embellishment of capabilities. The bad actors will abuse it.
Zero proof only works with some kind of attestation, maybe from the government, and there needs to be some amount of tracking or statistics or rate limiting to make sure everyone in a city isn’t sharing the same ID.
Some tracking turns into tracking everything, probably with an opaque system, and the justification that the “bad guys” can’t know how it works. We’ve seen it over and over with big tech. Accounts get banned or something breaks and you can’t get any info because you might be a bad guy.
Does your system work without sending my ID to a server and without relying on another party for attestation?
There's no dynamic analysis done, necessarily. In the Swiss design, fex, SD-JWTs are used for selective disclosure. For those, any information that you can disclose is pre-hashed and included in the signed credential. So `over_18: true` is provided as one of those hashes and I just show this to the verifier.
The verifier gets no other information than the strictly necessary (issuer, expiry, that kind of thing) and the over 18 bit, but can trust that it's from a real credential.
That's not strictly a zero knowledge proof based system, though, but it is prvacy-preserving.
Yes it does actually. You load your ID into your phone with the MRZ and NFC. The cryptographic proof inside your ID is used to verify that it was issued by an official government. So your ID is not being sent to a central server.
The reusing another ID is an issue. In some countries they will have a in person check to verify only you can load your ID into your phone. But then you still have the problem of sending a verification QR code to someone else and have them verify it. This might be solved by rolling time-gated QR codes and by making it illegal to verify someone else's verifications. But this is a valid concern and a problem that still needs solving.
Attestation from government sounds like the ideal solution. This could actually provide _more_ privacy because we can begin using attestation for things we currently use IDs for such as “Has the privilege of driving a car” or “Can purchase alcohol”
> If it’s analyzes my ID 100% client side I can fake any info I want. If my ID goes to a server,
amplifying your point, there is effectively no way for the layperson to make this distinction. And because the app needs to send data over an encrypted channel, it would be difficult at best for a sophisticated person to determine whether their info is being sent over the wire.
Immigrants do not have an ID for up to a few years when they move to Germany. Just this week the Berlin immigration office stopped issuing plastic residence cards for budget reasons, so people get a sticker in their passport.
Passport recognition is also spotty. The ID verification providers used by banks do not recognise Indian passports.
Will we exclude a few million people because it’s too expensive to verify that they are over 18?
Add this to “falsehoods programmers believe about ID verification”.
> Will we exclude a few million people because it’s too expensive to verify that they are over 18?
Yes. We absolutely will. KYC services is something that no one wants and everyone hates, thus there is no motivation to make it better. And if any, "better" might mean more invasive, because that means more data to mine and sell.
So, sure, excluding millions of people from KYC because it's cheaper to reject them than it is to study their documents - is the right decision business wise.
I am speaking as a person in the very same position.
In Austria you don't need an Austrian passport/Personalausweis for a Digital ID registration. Your original passport (or equivalent) in combination with a certificate of residence, student permit or similar is fine.
If the age verification is going to mandate government issued ID, the government issuer can be the Trust Anchor issuing a Digitally Signed Credential for the zero knowledge proof - using any available open source zero-knowledge process:
Apologies that I'm latching onto your post for visibility, but for the sake of discussion - the European Identity Digital Wallet project specification and standardisation process is in the open and lives on github (yeah, the irony isn't lost on me :) ):
Everything's very much WIP, but it aims to provide a detailed Archictecture and Reference Framework/Technical Specifications and a reference implementation as a guideline for national implementations:
In your system, can companies verify age offline, or do they need to send a token to the Government's authority to verify it (letting the Government identify and track users)?
Switzerland is working on a system that does the former, but if Government really wants to identify users, they can still ask the company to provide the age verification tokens they collected, since the Government hosts a centralized database that associates people with their issued tokens.
Aren't the companies also expected to do revocation checking, essentially creating a record of who identified where, with a fig leaf of "pseudonymity" (that is one database join away from being worthless)?
That assumes the companies store the individual tokens, as does the government. Neither of which are part of the design, but could be done if both sides desired it.
The Swiss design actually doesn't store the issued tokens centrally. It only stores a trust root centrally and then a verifier only checks the signature comes from that trust root (slightly simplified).
this is slightly better but not the hero we want or need. zeero knowledge proofs are improvement over uploading raw documents, trust is still an issue here. why should users have to authenticate with a government-backed identity wallet to access platforms to play games or access a website in the first place. we didnt have any of these guards in the 90s and early 2000s and everybody turned out just fine . in fact the average gen z is in a lot worse place than we used to be despite that we had complete raw algorithm supervision free access to the internet with far more disturbing content (remember ogrish and KaZaA)
The average person does not understand the math behind zero-knowledge proofs. They only see that state infrastructure is gatekeeping their web access. Furthermore, if the wallet relies on a centralized server for live revocation checks, the identity provider might still be able to log those authentication requests, effectively breaking anonymity at the state level.
On a practical level, this method verifies the presence of an authorized device rather than the actual human looking at the screen. Unless the wallet demands a live biometric scan for every single age check, they will simply bypass the system using a shared family computer or a parent's unlocked phone. We used to find our way around any sort of nanny software (remember net nanny)
what you are describing still remains a bubble and I really hope Americans aren't looking at EU for any sort of public policy directions here.
> we didnt have any of these guards in the 90s and early 2000s and everybody turned out just fine
One of the most highly valued tech companies of today makes a software that sometimes talks its user's into killing themselves. Some guy put "uwu notices bulge" on a bullet casing and shot Charlie Kirk: things turned out fine indeed.
I think there's a tradeoff triangle here, not dissimilar to Zooko's triangle or the CAP theorem, where the three aspects are age verification, privacy, and the freedom to run custom software on devices of your choosing.
You can have no system at all, which gives you freedom and privacy, but not age verification. You can have ID uploads, which give you age verification and freedom, but not privacy. You can have a ZKP-based system, which gives you age verification and privacy, but not freedom. This is because you need a way to prevent one unscrupulous ID owner from issuing millions of valid assertions for any interested user.
> As long as you trust the government that gave out the ID
I'm a citizen of a European Union member, I trust my government to issue me an ID and use said ID in my interactions with the state, I do not trust my state with anything more than that.
That is exactly the trust I mean. You need to trust that country X gives out valid IDs. If you have sketchy company Y giving out IDs to everyone, you probably would not trust any attributes derived from that ID. If you trust that a country gives out valid IDs, you can trust the information derived from that ID. You do not really need to trust your government any more than that for this system to work.
Correct. A ZK Proof backed identity system is a significant bump up in both privacy and security to even what we have right now.
Everyone does realize we're being constantly tracked by telemetry, right?
A proper ZK economy would mitigate the vast majority of that tracking (by taking away any excuse for those in power to do so under the guise of "security") and create a market for truly-secure hardware devices, while still keeping the whole world at maximal security and about as close to theoretical optimum privacy as you're going to get. We could literally blanket the streets with cameras (as if they aren't already) and still have guarantees we're not being tracked or stored on any unless we violate explicit rules we pre-agree to and are enforceable by our lawyers. ZK makes explicit data custody rules the norm, rather than it all just flowing up to whatever behemoth silently owns us all.
In Amsterdam 1850 the municipality kept track of people's names, address, age, gender and religion (bevolkingsregister). It meant nothing at this time, but 90 years later the Nazi used these lists to murder jewish people going house by house. Thanks for the partisans setting this archive ablaze, life were saved.
I'm not saying it's right or wrong, you tell me, I just want to point at this random timeline.
I shudder when I think of how effective the Stasi would have been in the digital age. The only thing checking them was the labour demands of surveillance.
When Trump came into power a second time, and the ICE-nazification became apparent, I reached out to my government and asked them what they were doing to make it harder for "Trumpism" to happen here. No reply. Just crickets.
Hoovering up less data would be a really fucking good start. There's something about babies and bathwater, but by god this has proven to be very dangerous bathwater time and time again.
First let me clearly state that I appreciate the amount of thought you guys are putting into creating better systems that have high privacy guarantees. I concede to you, that in some situations, your system leads to better privacy.
But I don't look at this on a purely technological level. These identity-based systems are instruments of control. Right now everything is still in flux with how these tools will be used and how accessible they are to the general population and the many minorities therein. I simply don't trust our politicians to do the right thing short-term and long-term. The establishment of the GDPR has been a major victory for better privacy legislation and now the Commission wants to hollow it out. The Commission also wants chat control to increase the amount of mass surveillance in Europe.
There is a potential future, where we all win. But I am highly skeptical, that in the current political climate, we will end up there.
You mean that system that requires either to use an original unmodified Android phone, or a iOS phone and it does not work in absolutely anything else?
No it is open-source and portable to any platform you want. We currently support iOS and Android through Play store and F-droid, but that is just because most of the market is there at the moment.
This is true, but I think it's more that those jurisdictions don't actually care about something solving this securely so much as they want face scans for other purposes?
In that system does the age verification result come with some sort of ID linked to my government issued ID card? Say, if I delete my account on a platform after verifying and then create a new one, will the platform get the same ID in the second verification, allowing it to connect the two and track me? Or is this ID global, potentially allowing to track me through all platforms I verified my age on?
What a verification process looks like from the user perspective? Do I have to, as it happens now, pull out my phone, use it as a card reader (because I don't have a dedicated NFC device on my computer), enter the pin, and then I'll be verified on my computer so I can start browsing social media feed? Or, perhaps, you guys have come up with a simpler mechanism?
The wallet ecosystem is still really varied at the moment. Our implementation is unlinkable. So an issuer cannot track where you use the attribute. And a verifier cannot see that you've used the same attribute multiple times with their system. This is great for privacy and tracking protection, but not so great for other things. For example, people sending their QR codes to other people with the correct attribute (like maybe an underage person sending an 18+ check to an adult), is hard to solve for because they are unlinkable.
Most systems right now have you load data in your phone. Then when a check happens, you scan a QR code. You then get a screen on your phone saying X wants to know Y and Z about you, do you want to share this information? Then you just choose yes or no.
For your social media example. You would just get a QR code on your pc, then pull out your phone, scan and verify, then start browsing social media on your pc.
In the Swiss system, it depends on what they verified. If they required your full ID, that has a document number like a passport and they could track that.
If they did the right thing and only asked for the over 18 bit, then they wouldn't have a trackable identifier.
You are describing a situation where a pairwise pseudonymous identifier is generated. I don't think any real system does this with government IDs, but it might be possible.
This is really cool and I want it for inter-government identification. Eg country B can check a ZK proof that I'm a citizen of country A, allowed to drive, not a criminal, have a degree, etx
The requirement to use google or apple services is a deal breaker. If I can't verify my age using an EU wallet without having an account with a US tech company what is the point of any of this?
No one would be foolish enough to trust their government nor the EU. You should be ashamed of working for such "people". Thanks for helping implementing a surveillance state.
You don't have to trust your government to employ them, the key is to bake in and maintain rigorous checks and balances, demand transparency, routinely audit and fire people for corruption, etc.
I just don't want to have to ID myself at every corner of the internet. Whether the site receives my details or not.
I've heard they even want to mandate periodic re-checks now which is insane. The internet should remain free.
Besides, if parents don't want to give access to social media they can just not give their kids a phone, or just use the many parental control features available on it. Every phone has this these days.
And even if the government wants to ban this stuff for all kids (which I would not agree with but ok I don't have kids so I don't really care and parents do seem to want this), they don't have to enforce it this way. They can just make the parents liable if the kids are found to have access.
To me this is just another attempt at internet censorship and control.
Good luck finding the single government in the world that actually wants that, rather than it being a pretext for control that is too sweet to pass up. If you manage to find them, post an article on HN about it as top places to move to.
The system you're describing is good for the masses, not for those with power.
For me it is disqualified for usage because I need to buy into a Google or Apple ecosystem. At least the reference implementation does. This is just the next level of enshitification. And no, I don't need a digital blockwart at all.
And I have zero illusion privacy is compromised, it is trivial to identify devices these days, so it doesn't even work technically.
Next sentence we hear some empty bickering about digital sovereignty. This is all bullshit.
We support F-droid, so you could get a degoogled android version and use that to load the app on your phone. The app could also be ported to other platforms, but right now there is really no market for it.
Someone brought up the need for device attestation for trust purposes (to avoid token smuggling for example). That would surely defeat the purpose (and make things much much worse for freedom overall). If you have a solution that doesn't require device attestation, how does that solve the smuggling issue (are tokens time-gated, is there a limit to token generation, other things)?
We do not require an attestation and things like token smuggling is still a problem we need to solve. We have a system that prioritizes unlinkability. So an issuer cannot track the attribute they give you. And a verifier cannot link multiple disclosures with the same attribute. This privacy really helps things like token smuggling however. Time-gated tokens may increase the difficulty, but will probably not make it impossible. Making it illegal to verify someone else's qr codes could also help of course.
Age verification is very hard, because parents will give their children their unlocked account, and children will steal their parents' unlocked account. If that's criminalized (like alcohol), it will happen too often to prosecute (much more frequently than alcohol, which is rarely prosecuted anyways). I don't see a solution that isn't a fundamental culture shift.
If there's a fundamental culture shift, there's an easy way to prevent children from using the internet:
- Don't give them an unlocked device until they're adults
- "Locked" devices and accounts have a whitelist of data and websites verified by some organization to be age-appropriate (this may include sites that allow uploads and even subdomains, as long as they're checked on upload)
The only legal change necessary is to prevent selling unlocked devices without ID. Parents would take their devices from children and form locked software and whitelisting organizations.
It's my job as a parent (and I have several kids...) to monitor the things they consume and talk with them about it.
I don't want some blanket ban on content unless it's "age appropriate", because I don't approve that content being banned. (honestly - the idea of "age appropriate" is insulting in the first place)
Fuck man, I can even legally give my kids alcohol - I don't see why it's appropriate to enforce what content I allow them to see.
And I have absolutely all of the same tools you just discussed today. I can lock devices down just fine.
Age verification is a scam to increase corporate/governmental control. Period.
You should be able to choose what's age-appropriate for your kids. Giving them access to e.g. "PG-13" media when they're 9 isn't the problem. Giving mature kids unrestricted access isn't a problem. The problem is culture:
- Many parents don't think about restricting their kids' online exposure at all. And I think a larger issue than NSFW is the amount of time kids are spending: 5 hours according to this survey from 2 years ago https://www.apa.org/monitor/2024/04/teen-social-use-mental-h.... Educating parents may be all that is needed to fix this, since most parents care about their kids and restrict them in other ways like junk food
> Fuck man, I can even legally give my kids alcohol - I don't see why it's appropriate to enforce what content I allow them to see.
In the USA it depends on the state. Federal guidelines for alcohol law does suggest exemptions for children drinking under the supervision of their parents, but that's not uniformly adopted. 19 states have no such exceptions, and in many of the remaining 31, restaurants may be banned from allowing alcohol consumption by minors even when their parents are there.
Your kids can’t buy alcohol though. If you want to unlock your phone and let your kids read smut then more power to you. Age gates do not and never will stop that. But I sure as hell don’t want companies selling porn to 5 yr olds.
> Age verification is very hard, because parents will give their children their unlocked account, and children will steal their parents' unlocked account
More simply: If ID checks are fully anonymous (as many here propose when the topic comes up) then every kid will just have their friends’ older sibling ID verify their account one afternoon. Or they’ll steal their parents’ ID when they’re not looking.
Discussions about kids and technology on HN are very weird to me these days because so many commenters have seemingly forgotten what it’s like to be a kid with technology. Before this current wave of ID check discussions it was common to proudly share stories of evading content controls or restrictions as a kid. Yet once the ID check topic comes up we’re supposed to imagine kids will just give up and go with the law? Yeah right.
The older sibling should be old enough to know better. Or if they're still a kid, they can have their privileges temporarily revoked.
This problem probably can't be solved entirely technologically, but technology can definitely be a part of solving it. I'm sure it's possible to make parental controls that most kids can't bypass, because companies can make DRM that most adults can't bypass.
> If ID checks are fully anonymous (as many here propose when the topic comes up) then every kid will just have their friends’ older sibling ID verify their account one afternoon.
Exactly the same way that kids used in former days to get cigarettes or alcohol: simply ask a friend or a sibling.
By the way: the owners of the "well-known" beverage shops made their own rules, which were in some sense more strict, but in other ways less strict than the laws:
For example some small shop in Germany sold beverages with little alcohol to basically everybody who did not look suspicious, but was insanely strict on selling cigarettes: even if the buyer was sufficiently old (which was in doubt strictly checked), the owner made serious attempts to refuse selling cigarettes if he had the slightest suspicion that the cigarettes were actually bought for some younger person. In other words: if you attempted to buy cigarettes, you were treated like a suspect if the owner knew that you had younger friends (and the owner knew this very well).
Circumventing controls as a kid is what taught me enough about computers to get the job that made college affordable (in those days you could just boot windows to a livecd Linux distro and have your way with the filesystem, first you feel like a hacker, later the adults are paying you to recover data).
If we must have controls, I hope the process of circumventing them continues to teach skills that are useful for other things.
More simply: If ID checks are fully anonymous (as many here propose when the topic comes up) then every kid will just have their friends’ older sibling ID verify their account one afternoon. Or they’ll steal their parents’ ID when they’re not looking.
Digital ID with binary assertion in the device is an API call that Apple's app store curation can ensure is called on app launch or switch. Just checking on launch or focus resolves that problem. It's no longer the account being verified per se, it's the account and the use.
Completely agree. The internet works differently than how people want it to, and filtering services are notoriously easy to bypass. Even if these age-verification laws passed with resounding scope and support, what would stop anyone from merely hosting porn in Romania or some country that didn't care about US age-verification laws. The leads to run down would be legion. I think you could seriously degrade the porn industry (which I wouldn't necessarily mind) but it would be more or less impossible to prevent unauthorized internet users from accessing pornography. And of course that's the say nothing of the blast radius that would come with age-verification becoming entrenched on the internet.
> what would stop anyone from merely hosting porn in Romania or some country that didn't care about US age-verification laws
A government could implement the equivalent of China's great firewall. Even if it doesn't stop everyone, it would stop most people. The main problem I suspect is that it would be widely unpopular in the US or Europe, because (especially younger) people have become addicted to porn and brainrot, and these governments are still democracies.
The only needed culture shift is everyone should realize that it's ultimately the parents/teachers' duty to educate the kids.
If parents think it's okay for their kids to use Facebook/X/whatever somehow responsibly, they should not be punished or prosecuted for that. Yes, I do believe it applies to alcohol too.
It's how it works in physical world. We let the parents to decide whether hiking/swimming/football/walking to the school are too dangerous for their kids. We let the parents to decide which books are suitable for their kids. But somehow when it comes to the internet it's the government's job. I can't help but think there is an astroturf movement manufacturing the consent rn.
Just a personal anecdote from my life - I have set up Youtube account for my kid with correct age restrictions (he is 11). Also this account is under family plan so there are no ads.
My kid logs out of this account so he can watch restricted content. I wonder - what is PG rating for logged out experience?
You mean this culture shift is needed for the masses but I don't think that's the case. In my widest social circle I am not aware of anyone giving alcohol to young kids (yes by the time they are 16ish yes but even that's rare). Most guardians would willingly do similar with locked devices.
The real problem is that the governments/companies won't get to spy on you if locked devices are given to children only. They want to spy on us all. That's the missing cultural shift.
> Most guardians would willingly do similar with locked devices.
Considering the echo chamber in which I was at school, my friends would have simply used some Raspberry Pi (or a similar device) to circumvent any restriction the parents imposed on the "normal" devices.
Oh yes: in my generation pupils
- were very knowledgeable in technology (much more than their parents and teachers) - at least the nerds who were actually interested in computers (if they hadn't been knowledgeable, they wouldn't have been capable of running DOS games),
- had a lot of time (no internet means lots of time and being very bored),
- were willing to invest this time into finding ways to circumvent technological restrictions imposed upon them (e.g. in the school network).
The kids in your social circle are used to not having access to alcohol, but they're not used to not having access to social media.
Hypothetically, if every kid in your social circle had their device "locked", the adults would probably have a very hard time the kids away from their devices, or just relent, because the kids would be very unhappy. Although maybe with today's knowledge, most people will naturally restrict new kids who've never had unrestricted access, causing a slow culture shift.
And we need a standard where websites can self-rate their own content. Then locked devices can just block all content that isn't rated "G" or whatever.
I imagine there would be a set of filters, including some on by default that most adults keep for themselves. For example, most people don't want to see gore. More would be OK with sexual content, even more would be OK with swear words, ...
Wrong incentive. If you don’t give a shit about exposing children to snuff or porn, but do give a shit about page views and ad revenue, you obviously don’t rate your content or rate it as G to increase that revenue.
The whitelist would be decided by the market: the parents have the unlocked device, there are multiple solutions to lock it and they choose one. Which means that in theory, the dominant whitelist would be one that most parents agree is effective and reasonable; but seeing today's dominant products and vendor lock-in...
I mean look, there's a point where the manufacturers back off and entrust the parents.
Any parent can be reckless and give their children all kinds of things - poison, weapons, pornographic magazines ... at some point the device has enough protective features and it is the parents responsibility.
Digital media use is easier to conceal than weapons. My parents did not protect me from it growing up because they were not responsible, and I was harmed as a result. To this day they still do not realize I was harmed, because I did not tell them and we are not on speaking terms. Trying to be honest would have resulted in further rejection from them. This was on a personality level and I had no way to deal with this as a developing human.
I could not control how my parents were going to raise me, I was only able to play with the hand I was dealt. I hate the idea that parents are sacrosanct and do not share blame in these situations. At the same time, if this is just the family situation you're given and you're handed a device unaware of the implications, who is going to protect you from yourself and others online if your parents won't? Should anyone?
>parents will give their children their unlocked account, and children will steal their parents' unlocked account.
I think either is better than the staus quo. In the first case the parent is waiving away the protections, and in the second the kid is.
Even if a kid buys alcohol, I think it's healthier that they do it by breaking rules and faking ids and knowing that they are doing something wrong, than just doing it and having no way to know it's wrong (except a popup that we have been trained by UX to close without reading (fuck cookie legislation))
That would be the status quo if we had better parental controls.
Trying to enforce parental controls via regulation may only be as effective as Europe enforcing the DMA against Apple. But maybe not, because there's a huge market; if Apple XOR Android does it, they'll gain market share. Or governments can try incentive instead of regulation (or both) and fund a phone with better parental controls. Europe wants to launch their own phone; such a feature would make it stand out even among Americans.
Prove of adulthood should be provided by the bank after logging into a bank account. I'm sure parents just would let their bank details be stolen and such.
Of course no personal details should be provided to the site that requests age confirmation. Just "barer of this token" is an adult.
The "Bank identity" system in Czech Republic (and likely other countries) can be used to log into to various government services. The idea is that you already authenticated to the bank when getting the account, so they can be sure it is really sou when you log in - so why not make it possible for you to log in to other services as well if you want to ?
So we trust a bank more than the government that they won’t extend this to earn more money by disclosing more information? Bad idea. You need a neutral broker.
Just remove "parent" and "account" from the mix and all these. Tie the screen to the human and most of these challenges go away. This is what is trying to be achieved with these laws, so we may as well institute it that way.
I actually don't hate this??? As long as parents can set up their own whitelists and it's not up to the government to have the final say on any particular block.
The problem of "kids accessing the Internet" is a purposeful distraction from the intent of these laws, which is population-level surveillance and Verified Ad Impressions.
That is actually a very good solution that is respecting privacy. And is much more effective than asking everyone for ID when opening a website or app.
How does this solve the problem at all? You're just making more problems. Now you have to deal with a black market of "unlocked" phones. You're having to deal with kids sharing unlocked phone. Would police have to wal around trying to buy unlocked phones to catch people selling them to minors? What about selling phones on the internet, would they check ID now?
SOME parents give their children access to their ID. That is NOT the same as ALL parents, and therefore is not a reason not to give those parents a helping hand.
Even just informing children that they're entering an adult space has some value, and if they then have to go ask their parents to borrow their wallet, that's good enough for me.
It would not be solved without a culture shift. But with a culture shift, giving a kid an unlocked device would be as rare as giving them drugs.
I'm sure it will occasionally happen. But kids are terrible at keeping secrets, so they will only have the unlocked device for temporary periods, and I believe infrequent use of the modern internet is much, much less damaging than the constant use we see problems from today. A rough analogy, comparing social media to alcohol: it's as if today kids are suffering from chronic alcoholism, and in the future, kids occasionally get ahold of a six pack.
Definitively we should have constant verification of the current user with Face ID or similar tech. Every 5 minutes of usage, your camera is activated to check who’s using your phone and validates it. So much secure and safe. /s
This is Nirvana/Perfect Solution fallacy. That's like saying limiting smoking to 18 y/o was futile because teenagers could always have some other adult buy them cigs, or use fake IDs.
Well, age verification is the "we have to do something about this nebulous problem even if the best thing we can think of actually makes everything worse for everyone but it makes us feel better" fallacy, which is equally ridiculous.
I'm saying that, in today's culture, age-gating the internet is likely to be much less effective than age-gating alcohol or tobacco. Most kids spend an appalling amount of time on social media (think, 5 hours/day*); most kids didn't spend this much time or invest this much of their lives into drugs.
For the smoking analogy to fit, you'd have to have parents giving their children packs of cigarettes to play with and then being mad at Marlboro they figured out how to smoke them.
We'll try everything, it seems, other than holding parents accountable for what their children consume.
In the United States, you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using, yet somehow, the same social responsibility seems thrown out the window for parents and the web.
Yes, children are clever - I was one once. If you want to actually protect children and not create the surveillance state nightmare scenario we all know is going to happen (using protecting children as the guise, which is ironic, because often these systems are completely ineffective at doing so anyway) - then give parents strong monitoring and restriction tools and empower them to protect their children. They are in a much better and informed position to do so than a creepy surveillance nanny state.
That is, after all, the primary responsibility of a parent to begin with.
I know this is weird, but I'm in some ways not really sure who is on the side of freedom here. I get your position, but like. The whole idea of the promise of the internet has been destroyed by newsfeeds and mega-corps.
There is almost literally documented examples of Facebook executives twirling their mustaches wondering how they can get kids more addicted. This isn't a few bands with swear words, and in fact, I think that the damage these social media companies are doing is in fact, reducing the independence teens and kids that have that were the fears parents originally had.
I dunno, are you uncertain about your case at all or just like. I just like, can't help but start with fuck these companies. All other arguments are downstream of that. Better the nanny state than Nanny Zuck.
> I just like, can't help but start with fuck these companies. All other arguments are downstream of that.
The solution would then be to break them up or do things like require adversarial interoperability, rather than ineffective non-sequiturs like requiring them to ID everyone.
The perverse incentive comes from a single company sitting on a network effect. You have to use Facebook because other people use Facebook, so if the algorithm shows you trash and rage bait you can't unilaterally decide to leave without abandoning everyone still there, and the Facebook company gets to show ads to everyone who uses it and therefore wants to maximize everyone's time wasted on Facebook, so the algorithm shows you trash and rage bait.
Now suppose they're not allowed to restrict third party user agents. You get a messaging app and it can send messages to people on Facebook, Twitter, SMS, etc. all in the same interface. It can download the things in "your feed" and then put it in a different order, or filter things out, and again show content from multiple services in the same interface, including RSS. And then that user agent can do things like filter out adult content, if you want it to.
We need to fix the actual problem, which is that the hosting service shouldn't be in control of the user interface to the service.
> I just like, can't help but start with fuck these companies. All other arguments are downstream of that. Better the nanny state than Nanny Zuck.
Wild times when we're seeing highest voted Hacker News commenters call for the nanny state.
If you're thinking these regulations will be limited to singular companies or platforms you don't use, there is no reason to believe that's true.
There was already outrage on Hacker News when Discord voluntarily introduced limited ID checks for certain features. The invitations to bring on the nanny state reverse course very quickly when people realize those regulations might impact the sites they use, too.
A lot of the comments I'm seeing assume that only Facebook or other platforms will be impacted, but there's now way that would be the case.
For me this is a crux, at least in principle. Once online media is so centralized... the from argument freedom is diminished.
There are differences between national government power and international oligopoly but... even that is starting to get complicated.
That said... This still leaves the problem in practice. We get decrees that age-restriction is mandatory. There will be bad compliance implementations. Privacy implications.
Meanwhile a while... how much will we actually gain when it comes to child protection.
You can come up will all sorts of examples proving "Facebook bad" but that doesn't mean these things are fixed when/if regulation actually comes into play.
Those execs were also using the tactics to addict adults, and while they may have targeted teens, the problem is, at its core: humans. So no amount of nannying by either the company nor the government will solve this issue.
Who would be responsible if a child developed alcohol addiction? A nicotine problem? Any other addiction?
Exactly. The same people that should be responsible for giving them unfettered access to an internet that is no longer safe. Even adults have to be wary of getting hooked on scrolling, and while I agree that the onus is on the companies, it has been demonstrated over and over again that they will not be held to account for their behavior.
So the only logical choice left that actually preserves freedom is for parents to get off their ass and keep their child safe. Parent's that don't use filtering and monitoring software with their children should be charged with neglect. They are for sending a kid into the cold without a coat, or letting them go hungry, why is it different sending them onto the internet?
And to your last point: You are dead wrong. No government anywhere in the world has demonstrated that they have the resources, expertise, or technical knowledge to solve this problem. The most famously successful attempt is the Chinese Great Firewall, which is breached routinely by folks. As soon as a government controls what speech you are allowed to consume, the next logical step for them is to restrict what speech you can say, because waging war on what people access will always fail. I mean, Facebook alone already contains tons of content that's against its terms of service, and they have more money than God, so either they actually want that content there, or they are too understaffed to deal with the volume, and the volume problem only ever increases.
So in my view, you are the one against freedom by advocating for the government to control the speech adults can access for the sake of "protecting the children" when the actual people that are socially, morally, and legally culpable for that protection are derelict in their duties.
Social media is like tobacco. We went after tobacco for targeting kids, we should do the same to social media. Highly engineered addictive content is not unlike what was done to cigarettes.
Maximizing corporate freedom leads inevitably to corporate capture of government.
Opposing either government concentration of power alone or corporate concentration of power alone is doomed to failure. Only by opposing both is there any hope of achieving either.
Applying that principle to age-verification, which I think is inevitable: Prefer privacy-preserving decoupled age-verification services, where the service validates minimum age and presents a cryptographic token to the entity requiring age validation. Ideally, discourage entities from collecting hard identification by holding them accountable for data breaches; or since that's politically infeasible, model the service on PCI with fines for poor security.
The motivation for this regime is to prevent distribution services from holding identification data, reducing the information held by any single entity.
> I just like, can't help but start with fuck these companies. All other arguments are downstream of that. Better the nanny state than Nanny Zuck.
How about we reject all institutional nannies?
It is much easier to implement user-controlled on-device settings than any sort of over-the-Internet verification scheme. Parents purchase their children's devices and can adjust those settings before giving it to their kids. This is the crux of the problem, and all other arguments are downstream of this.
Don't conflate the Internet with Social Media. Social media is a service, just like FTP. The death of social media will not mean the death of the Internet. There's an argument that reducing social media use, by age verification or other means, will lead to a more free Internet due to reduced power of gatekeepers.
> I know this is weird, but I'm in some ways not really sure who is on the side of freedom here. I get your position, but like.
No one. You’ll see a few politicians and more individuals stuck to their principles, but anyone with major clout sees the writing on the wall and is simply working to entrench their power.
> Better the nanny state than Nanny Zuck.
Indeed, what lolberts fail to understand usually is not a choice between government vs “freedom” it’s a choice between the current government and whoever will fill up the power vacuum left by the government.
The problem is that internet is used nowadays for democratic purposes. Once you introduce a globally unique personal ID, you will be monitored. And boy, you will be monitored throughoutly. In case of any democratic process that needs to be undertaken in future against government, this very government will take the tools of identification and will knock to the doors of people who try to raise awareness and maybe mutiny. And this is what Orwell wrote about
>I get your position ... There is almost literally documented examples of Facebook executives twirling their mustaches wondering how they can get kids more addicted. This isn't a few bands
Their position was to compare it to alcohol, guns, and tobacco, not bands using naughty words. Alcohol and tobacco definitely enter mustache swirling territory, getting children addicted and funding misinformation on the harms of their product.
For all the complaining some U.S.-Americans seem to do about the EU approach to these issues, things like the Digital Markets Act aim to fix exactly these types of issues.
I appreciate GPs point about giving “parents strong monitoring and restriction tools and empower them to protect their children”. That’s good. That acknowledges that we can and should give parents tools to deal with their kids and not let them fend for themselves (one nuclear family all alone) against the various algorithms, child group pressure, and so on.
But on the whole I’m tired of the road to serfdom framing on anything that regulates corporations.
Yes. Let’s be idealistic for a minute; the Internet was “supposed to” liberate us. Now we have to play Defense every damn day. And the best we have to offer is a false choice between nanny state and tech baron vulturism?
For a second just imagine. An Internet that empowers more than it enslaves. That makes us more equal. It’s difficult but you can try.
> I know this is weird, but I'm in some ways not really sure who is on the side of freedom here.
That’s because “freedom” is complicated and doesn’t precisely map to the interests of any of the major actors. Its largely a war between parties seeking control for different elites for different purposes.
> documented examples of Facebook executives twirling their mustaches wondering how they can get kids more addicted
If you genuinely believe that this is about those moustache twirling executives, then I have a bridge to sell you.
Have you ever wondered why and how these systems are being implemented? Have you ever gone why Discord / Twitch / what have you and why now? Have you ever thought that this might be happening because of Nepal and the fears of another Arab spring?
I think too many people on this platform don't understand what this is about. This is about power. It's not about what's good for you or the children. Or for the constituents. It's about power. Real power. Karp-ian "scare enemies and on occasion kill them" power.
There are many ways in which such a system could be implemented. They could have asked people to use a credit card. Adult entertainment services have been using this as a way to do tacit age verification for a very long time now. Or, they could have made a new zero-knowledge proof system. Or, ideally, they could have told the authorities to get bent. †
Tech is hardly the first industry to face significant (justifiable or unjustifiable) government backlash. I am hesitant to use them as examples as they're a net harm, whereas this is about preventing a societal net harm, but the fossil fuel and tobacco industries fought their governments for decades and straight up changed the political system to suit them. ††
FAANG are richer than they ever were. Even Discord can raise more and deploy more capital than most of the tobacco industry at the time. It's also a righteous cause. A cause most people can get behind (see: privacy as a selling point for Apple and the backlash to Ring). But they're not fighting this. They're leaning into it.
Let's take a look at what Discord asked people for a second, the face scan,
If you choose Facial Age Estimation, you’ll be prompted to record a short video selfie of your face. The Facial Age Estimation technology runs entirely on your device in real time when you are performing the verification. That means that facial scans never leave your device, and Discord and vendors never receive it. We only get your age group.
Their specific ask is to try and get depth data by moving the phone back and forth. This is not just "take a selfie" – they're getting the user to move the device laterally to extract facial structure. The "face scan" (how is that defined??) never leaves the device, but that doesn't mean the biometric data isn't extracted and sent to their third-party supplier, k-Id.
There was an article that went viral for spoofing this, https://news.ycombinator.com/item?id=46982421 . In the article, the author found by examining the API response the system was sending,
k-id, the age verification provider discord uses doesn't store or send your face to the server. instead, it sends a bunch of metadata about your face and general process details.
The author assumes that "this [approach] is good for your privacy." It's not. If you give me the depth data for a face, you've given me the fingerprint for that face.
We're anthropomorphising machines. A machine doesn't need pictures; "a bunch of metadata" will do just fine.
We are assuming that the surveillance state will require humans sitting in a shadow-y room going over pictures and videos. It won't. You can just use a bunch of vectors and a large multi-modal model instead. Servers are cheap and never need to eat or sleep.
We can assume de facto that Discord is also doing profiling along vectors (presumably behavioral and demographic features) which that author described as,
after some trial and error, we narrowed the checked part to the prediction arrays, which are outputs, primaryOutputs and raws.
turns out, both outputs and primaryOutputs are generated from raws. basically, the raw numbers are mapped to age outputs, and then the outliers get removed with z-score (once for primaryOutputs and twice for outputs).
Discord plugs into games and allows people to share what they're doing with their friends. For example, Discord can automatically share which song a user is listening on Spotify with their friends (who can join in), the game they're playing, whether they're streaming on Twitch etc.
In general, Discord seems to have fairly reliable data about the other applications the user is running. Discord also has data about your voice and now your face.
Is some or all of this data being turned into features that are being fed to this third-party k-ID? https://www.k-id.com/
k-ID is (at first glance) extracting fairly similar data from Snapchat, Twitch etc. With ID documents added into the mix, this certainly seems like a very interesting global profiling dataset backstopped with government documentation as ground truth.
Somehow there's tens to hundreds of millions available for crypto causes and algorithmic social media crusades, but there's none for the "existential threat" of age verification.
You sound like someone that would work for the CIA or FBI if they offered you a job. Those are the types of people that I cannot and will not ever trust. I do not respect your opinion.
I don't get your point, at least not in relation to the GP post. I agree with GP, parents need to be more accountable. We as parents, and We should all be concerned about future children/generations, should be demanding more regulation to help force the change we need on this topic. We as a society need to treat SM like those other addictive product classes. The fact SM is addicting and execs try to juice it more, is frankly to be expected.
Vilify them all you want, but same has been done with nicotine products, alcohol products, etc. and to GPs point, we SM as a toy for our children to play with. We chose to change the rules (laws, regulations, etc) because capitalists can never be simply trusted to do what's best for anything except their bottom line. That's a fundamental law no different than inertia or gravity in a capitalistic society. That's why regulators exist. Until you regulate it, they will wear their villain badge and rake in the billions. It's easy to be disliked when the topic of your disdain is what makes you filthy rich (in other words, they don't care what you or I think of what they're doing).
Nobody is putting a gun to your head and forcing you to use Facebook or whatever other site. I quit using most social media over a decade ago. If you don't want to use it, or you don't want your children to use it, then don't use it.
I am a bit confused by that comment. Are parents social responsible to prevent companies from selling alcohol/guns/cigarettes to minors? If a company set up shop in a school and sold those things to minors during school breaks, who has the social responsibility to stop that?
when I was a kid in the early 90's, my state (and many others) banned cigarette vending machines since there was no way to prevent them being used by minors, unless they were inside a bar, where minors were already not allowed.
I think the argument is more around it being illegal so as to not be forced into playing "the bad guy". It's hard to prevent a level of entitlement and resentment if those less well parented have full access. If nobody is allowed then there's no parental friction at all.
Its unfortunate that the application of this rule is being performed at the software level via ad-hoc age verification as opposed to the device level (e.g. smartphones themselves). However that might require the rigimirole of the state forcibly confiscating smartphones from minors or worrying nepalise outcomes.
Parents can't easily prevent their kids from going to those kinds of stores once they're at the age where the parent doesn't need to keep an eye on them all the time and they can travel about on their own.
The difference though is that parents are generally the ones to give their kids their phones and devices. These devices could send headers to websites saying "I'm a kid" -- but this system doesn't exist, and parents apparently don't use existing parental controls properly or at all.
Well the parents entrust their kids to the school, so they would be the ones responsible for what goes on on their premises. In turn, school computers are famously locked down to the point of being absolutely useless.
Companies are legally prohibited from marketing and selling certain products like tobacco and alcohol because they historically tried to.
Parents are legally and socially expected to keep their kids away from tobacco and alcohol. You're breaking legal and social convention if you allow your kids to access dangerous drugs.
Capitalist social media is exactly as dangerous as alcohol and tobacco. Somebody should be held responsible for that, and the legal and social framework we already have for dealing with people who want to get kids addicted to shit works fairly well.
> give parents strong monitoring and restriction tools
The problem is that it's bloody hard to actually do this. I'm in a war with my 7yo about youtube; the terms of engagement are, I can block it however I want from the network side, and if he can get around it, he can watch.
Well, after many successful months of DNS block, he discovered proxies. After blocking enough of those to dissuade him, he discovered Firefox DNS-over-HTTPS, making it basically impossible to block him without blocking every Cloudflare IP or something. Would love to be wrong about that, but it seems like even just blocking a site is basically impossible without putting nanny-ware right on his machine; and that's only a bootable Linux USB stick away from being removed unless I lock down the BIOS and all that, and at that point it's not his computer and the rules of engagement have been voided.
For now I'm just using "policy" to stop him, but IMO the tools that parents have are weak unless you just want your kid to be an iPad user and never learn how a computer works at all.
Sounds like a smart kid, is part of you secretly proud of him for his tenacity?
Is it impractical to keep an eye on what he's doing on his computer, i.e. physically checking in on him from time to time?
How about holding him responsible for his own behavior, to develop respect for the rules you impose? Is it just hopeless, and if so how come? Is it impossible for him to understand why you don't want him watching certain content or why he should care about being worthy of your trust?
I remember when I was a kid that age there were rules and some were technically enforced. But if you found a way around the technical enforcement you were in huge trouble. The equivalent here would he been, if you used a proxy to watch what you weren't meant to, then you lose all screen time indefinitely. Sneaking around parents' rules was absolutely not on.
The obvious solution would be TLS interception and protocol whitelisting. Same as corporate IT. Stick the kids' devices on a separate vLAN if you don't want to catch all the other devices in the crossfire.
Still, there's an awful lot of excellent educational content on YouTube. It seems unfortunate to block access to that. Have you considered self hosting an alternative frontend for it?
At this point why not just emancipate him. Hook him up with an easy remote job, put a lock on his bedroom and hand him the keys, and make him start paying rent. Because I’m having trouble figuring out what part of society you’re preparing him for at this stage. Respectfully.
Putting controls on the machine you want to restrict is pretty normal. While I agree with your first sentence that it's hard for parents to get proper monitoring tools, the rest of this sounds like a self-imposed problem. If you don't want to mess with the actual machine then run a proxy it has to use.
You're understating the US's policy on recklessness. We have "attractive nuisances," which means that if you put a trampoline in your backyard, and a kid passing through sees it, decides to do a sick jump off of it, and breaks their leg, that was partly your fault for having something so awesome that kids would probably like.
> which means that if you put a trampoline in your backyard, and a kid passing through sees it, decides to do a sick jump off of it, and breaks their leg, that was partly your fault for having something so awesome that kids would probably like.
That's not exactly accurate. The two key parts of the attractive nuisance law are a failure to secure something combined with the victim being too young to understand the risks.
So if you put a trampoline in your front yard, that's an easy attractive nuisance case.
If you put a pool in your back yard with a fence and a locked gate, it would be much harder to argue that it was an attractive nuisance.
If a 17 year old kid comes along and breaks into your back yard by hopping a 6-foot tall fence, you'd also have a hard time knowing they didn't understand that their activities came with some risk. Most cases are about very young children, though there are exceptions
It would not be quite that simple. The trampoline (or pool, or whatever) would have to be visible, in a place children were likely to be, not protected by any reasonable amount of care, and the kid would have to be young enough to not know any better.
The legal doctrine is also not specific to the US, of course.
As a parent, I think you’re understating how difficult it is to provide a specific amount of internet access (and no more) to a motivated kid. Kids research and trade parental control exploits, and schools issue devices with weak controls whether parents like it or not. I’m way at the extreme end of trying to control access (other than parents who don’t allow any device usage at all) and it has been one loophole after another.
As somebody who is entirely for restrictions on internet / social media, I think you're missing the bigger picture here. First, you assume that parents have the technical knowhow to restrict their kids from specific sites. My parents used a lot of different tools when I was a kid, but between figuring out passwords, putting my fingerprint onto my mom's phone, and spoofing mac addresses, I always found a way around the restrictions so I could stay up later.
But let's assume the majority of parents can actually do this. The problem with social media is not an individual one! We've fallen into a Nash Equilibrium, a game theory trap where we all defect and use our phones. If you don't have a phone or social media nowadays you will have much more trouble socializing than those who do, even though everyone would be better off if nobody used phones. As a teenager, you don't want to be the only one without a phone or social media. And so I truly do think the only solution is with higher level coordination.
Now, it's possible that the government isn't the right organization to enforce this coordination. Unfortunately, we don't really have any other forms of community that work for this. People already get mad at HOA's for making them trim their lawn; imagine an HOA for blocking social media! I do think the idea of a community doing this would be great though, assuming (obviously) that it was easy to move on and out of, as well as local. This would also help adults!
So to be honest, I don't think parents have the individual power to fix this, even with their kids.
It's much easier to give individual users control over their own device than to give a centralized authority control over what happens on everyone's device over the Internet. Local user-controlled toggles are just easier to implement.
All parental moderation mechanisms can and should be implemented as opt-in on-device settings. What governments need to do is pressure companies to implement those on-device settings. And what we can do as open-source developers is beat them to the punch. Each parent will decide whether or not to use them. Some people will, some won't. It's not Bob's responsibility to parent Charlie's children. Bob and Charlie must parent their own children.
To the people arguing that parents are too dumb to control their children's tech usage because they themselves are tech-illiterate: millennia ago, we invented this new thing called fire. Most people were also "too dumb" to keep their children away from the shiny flames. People didn't know what it was or how dangerous it could be. So the tribe leader (who, by the way, gropes your children) proposed a solution: centralize control of all the fire. Only the tribe leader gets to use it to cook. Everyone else just needs to listen to him. Remember, it's all for you and your children's safety.
> we invented this new thing called fire [...] So the tribe leader (who, by the way, gropes your children) proposed a solution: centralize control of all the fire
Of all the things, a "save-the-children prolegomena to the Prometheus myth" certainly wasn't on my bingo card today. So thank you for that, but I'm not aware of any reports of fire-keeping in the way you've described. Societies and religions do have sacred traditions related to fire (like Zoroastrians) but that doesn't come with restrictions on practical use AFAIK.
Am I the only one that is repeatedly amused at how many smart people are just caving to making this about parents/children at all?
We've literally watched things unfold in real time out in the open in the last year I don't know how much more obvious it could be that child-protections are the bad-faith excuse the powers that be are using here. Combined with their control of broadcasting/social media, it's the very thing they're pushing narratives in lockstep over. All this to effectively tie online identities to real people. Quick and easy digital profiles/analytics on anyone, full reads on chat history assessments of idealogies/political affiliations/online activities at scale, that's all this ever was and I _know_ hackernews is smart enough to see that writing on the wall. Ofc porn sites were targeted first with legislation like this, pornography has always been a low-hanging fruit to run a smear campaign on political/idealogical dissidents. It wasn't enough, they want all platform activity in the datasets.
I can't help but feel like the longer we debate the merits of good parenting, the faster we're just going to speedrun losing the plot entirely. I think it goes without saying that no shit good parenting should be at play, but this is hardly even about that and I don't know why people take the time of day. It's become reddit-caliber discussion and everyone's just chasing the high of talking about how _they_ would parent in any given scenario, and such discussion does literally nothing to assess/respond to the realities in front of us. In case I'm not being clear, talking about how correct-parenting should be used in lieu of online verification laws is going to do literally nothing to stop this type of legislation from continually taking over. It's not like these discussions and ideas are going to get distilled into the dissent on the congressional floors that vote on these laws. It is in it's own way a slice of culture war that has permeated into the nerd-sphere.
Sorry, I know it's a hard line for parents to tread and it's really easy to criticize parenting decisions other people are making, but the "everyone else is doing it so I have to" always seems as lazy to me today, as it probably did to my parents when I said it to them as a teenager.
Is it more important to prevent your son from being weaponized and turned into a little ball of hate and anger, and your daughter from spending her teen years depressed and encouraged to develop eating disorders, or to make sure they can binge the same influencers as their "friends"?
this is the biggest problem, so many parents are head-in-the-sand when it comes to things that can damage a child’s mind like screen time, yet no matter how much you protect them if it’s not a shared effort it all goes out the window, then the kid becomes incentivized to spend more time with friends just for the access, and can develop a sense that maybe mom and dad are just wrong because why aren’t so-and-so’s parents so strict?
because their parents didn’t read the research or don’t care about the opportunity cost because it can’t be that big of a deal or it would not be allowed or legal right? at least not until their kid gets into a jam or shows behavioral issues, but even then they don’t evaluate, they often just fall prey to the next monthly subscription to cancel out the effects of the first: medication
Its more like age verification corporations, identity verification corporations, the child "safety" organizations that were lobbying for Chat Control versus individuals who want to protect their privacy.
A monitoring solution might have worked for my case if my parents had monitored my Internet history, if they always made sure to check in on what I thought/felt from what I watched and made sure I felt secure in relying on them to back me up in the worst cases.
But I didn't have emotionally mature parents, and I'm sure so many children growing up now don't either. They're going to read arguments like these and say they're already doing enough. Maybe they truly believe they are, even if they're mistaken. Or maybe they won't read arguments like these at all. Parenting methods are diverse but smartphones are ubiquitous.
So yes, I agree that parents need to be held accountable, but I'm torn on if the legal avenue is feasible compared to the cultural one. Children also need more social support if they can't rely on their parents like in my case, or tech is going to eat them alive. Social solutions/public works are kind of boring compared to technology solutions, but society has been around longer than smartphones.
The world is becoming increasingly more uncertain geopolitically. We have incipient (and actual) wars coming, and near term potential for societal disruption from technological unemployment. Meanwhile social media has all but completely undermined broadcast media as a means of social control.
This isn't about protecting children. It's about preventing a repeated of the Arab Spring in western countries later this decade.
"Think of the children" is the oldest trick in the book, and should always be met with skepticism.
GP isn't interested in protecting children either. Punishing parents harder does nothing to improve the lives of children — in fact it makes them much worse, because now they are addicted to Facebook and their parents are in jail. It just makes certain people feel morally righteous that someone got punished.
> then give parents strong monitoring and restriction tools and empower them to protect their children
I think this is the right way to solve the problem.
For example, I think websites should have a header or something that indicates a recommended age level, and what kinds of more mature content and interactions it has, so that filters can use that without having to use heuristics.
As a parent blocking websights is a joy, maybe the rule should be to allow guardians more ability to control that. Trying to block some services is not trivial
As a human, I'd love to see the rest of you fools quit that. If HN ever starts to algorithm me I'll be gone too.
And as a bonus you can block your boomer parent's access to cnn and msnbc (or whatever its called now) and perhaps fox. It will make Thankgiving a lot more pleasant for all.
PS Mom, I don't know why cnn doesn't work anymore. ;)
It's very easy to lock up alcohol/cigarettes, a child should never have access. Internet usage is more like broadcast media, a child should have regular access.
The positives and negatives of Internet usage are more extreme than broadcast media but less than alcohol/guns. The majority of people lack the skills to properly censor Internet without hovering over the child's shoulder full-time as you would with a gun. Best you can do is keep their PC near you, but it's not enough.
We agree that a creepy surveillance nanny state is not the solution, but training parents to do the censorship seems unattainable. As we do for guns/alcohol/cigarettes, mass education about the dangers is a good baseline.
EDIT: And some might disagree about never having access to alcohol!
Devices such as phones come with an option when you start the device asking simply is this for a child or an adult. Your router generally these days comes with a parental filter option on start up too.
Heck we have chatgpt that can guide a parent through setting up a system if they want something more custom.
If people want to push, they should just push to make these set up options more ubiquitous, obvious and standardized. And perhaps fund some advertising for these features.
This is where Apple, Microsoft and Android need to step up. Indeed they already have in many ways with things being better than they used to be.
There needs to be a strict (as in MDM level) parental control system.
Furthermore there needs to be a "School Mode" which allows the devices to be used educationally but not as a distraction. This would work far better than a ban.
Ironically, the government that is pushing this only set a drinking age just a couple of years ago (as in the last 10 years). In case you believed this was actually about kids.
The speech that worked (mostly) on the children in my life involved the concept of 'cannot unsee', which they seemed to understand. There are some parallels to gun safety here because there are things that even the adults in your life try not to do and it seems perfectly reasonable to expect the same from children.
In fact being held to a standard that adults hold themselves to is frequently seen as a rite of passage. I'm a big girl now and I put on my big girl pants to prove it.
parents can be held liable for buying their kids cigarettes but, similarly, tobacco companies are (at least nominally) not supposed to target children in their advertising campaigns and in the design of their products.
It's obviously not a 1/1 comparison here, because providing ID to access the internet is not analogous to providing ID to purchase a pack of Cowboy Killers but we can extrapolate to a certain extent.
The richest brightest minds of our generation all being motivated towards one addictive goal, and we'll just put the responsibility on the parents...I think society can collectively do better.
What? Are there billion dollar companies with huge staffs who are constantly trying to figure out how to sneak my child a gun all the time, at school, wherever they go?
I'd say this comparison is good -- we as a society have decided that people who provide alcohol, guns, and cigarettes are responsible if children are provided them. You don't get to say 'hey, you didn't watch your child, they wandered into my shop, I sold to them 2 liters of vodka and a shotgun'.
You can expect the individual to compensate for a poorly structured society all you want.
> you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using
You can only expect so much from individual responsibility. At some point you need to structure society to compensate for the inevitable failures that occur.
> They are in a much better and informed position to do so than a creepy surveillance nanny state.
I'd rather live in a nanny state than ever trust american parenting. We've demonstrated a million times over that that doesn't work and produces even more fucked up people and abused children.
If we expect Parents to treat Social Media like other unhealthy, dangerous, and highly addictive products, then that can never start with "just expect ignorant parents to all magically start doing something difficult, for no real reason".
It starts by banning kids from the internet, entirely. It starts with putting age restrictions on who can buy internet connected devices. It starts by arresting parents and teachers who hand pre-literacy kids an always-online iPad. It starts with an overwhelming propaganda campaign: Posters, Commercials, After-School Specials, D.A.R.E. officers, red ribbon week.
Then, ultimately, it still finishes with an age-gated internet where every adult is required to upload their extremely valuable personal information to for-profit companies, for free
(With the added weight of being forced to agree to extreme ToS, like arbitration agreements).
So what do we do? I agree that the age of entry to the internet should match other vices (currently 21+ in the US, although really that should probably be 18+)...
It will never be acceptable for a single country's police state to extend across international borders, so... we just ban all of the UK and Australia from every web service until they get withdrawals and promise to stay nice? That could be a start.
But this whole situation in like 'freedom of speech' once you start picking and choosing what counts as "acceptable" speech, then suddenly you lose everything. You literally can't make everyone happy, because everything subjective is open to contradiction - and because there are freaks in the world who will never be satisfied by anything less than a complete global ban of everything.
Who gets a say? Do the Amish get to tell us what we are allowed to do? Where do you draw the line? You can't.
Completely open is the only acceptable choice.
But I still vote we start publicly mocking the parents who give their kids an ipad, and treat them like they just gave that kid a cigarette. Because seriously, they're ruining that kid.
I've already thought about it from the US's perspective and here's my path forward.
If government does not want kids to have access to the naughty bits of the internet but thinks there's something worth sharing with children then the government should provide a public internet for kids and THATS the site that will ask for a login known to belong to a kid. We already do public schooling with public funding and we do not let rando adults sit in classrooms with kids and they get a school id. Boot <18's off the public internet AT THE SOURCE when internet connectivity is PURCHASED / CONTRACTED FOR with a valid adult id / proof of age, but allow them vpn access into whatever the government thinks the child should have access to, like the schools page, I would say online encyclopedias or wikipedia type things but I'm not sure if government wants children to read about the variety of so many different things on this planet we're sorta trapped on and lets face it, restricting communications of the kids to points outside the control of parents is exactly what the government is complaining about, the government does not want kids to have free access to information.
Think of a phone or tablet that can only access the network through either a proxy or vpn but otherwise locked down. It certainly seems like it doesn't require much programming, heck have trump vibe code it for all I care.
I mean yeah, parents could just teach their kids the tough stuff because thats how it used to work anyways, well that and the libraries and schools but those can be pruned of bad books and bad teachers at the request of government anyways right? The kids could also be interviewed periodically by government to inventory what topics they have discussed with their parents to weed out the 't' or 'g' words.
I mean yeah I don't see a place for facebook in that intranet but isn't that sort of the point, we all know big social media will be incentivised to promote engagement with less regard for safety, so why do kids need facebook anyways? The instagrams and ticktalks are worse although maybe government should make a child friendly ticktalk type school social network, call it trumps school for kids for all I care, folks in power right now and a significant part of the US believe that trump knows whats good for kids right?
I mean obviously the libraries have to be REALLY REALLY cleaned up but thats just a detail. But why are parents forcing internet wierdos onto their kids with these smartphones / porno studios in their pockets? What do they think chester the molester on ticktalk is gonna have the kid upload their id? even if he does, do we really want that? c'mon man
The difference with guns, tobacco and alcohol is huge: all negatives aside, giving kids what they want makes the life of a parent so much easier. Take it away and many parents will fight. Sugar is in the same game.
you can’t blame it on parents alone, but the odds are stacked against children and their parents, there are very smart people whose income depends on making sure you never leave your black mirror
the surveillance state is possible, achievable, and a few coordination games away from deployment with backing from a majority who should know better
Except companies provide wholly inadequate safeguards and tools. They are buggy, inconsistent, easily circumvented, and even at time malicious. Consumers should be better able to hold providers accountable, before we start going after parents.
The only real solution is to keep children off of the internet and any internet connected device until they are older. The problem there is that everything is done on-line now and it is practically impossible to avoid it without penalizing your child.
If social media and its astroturfers want to avoid outright age bans, they need to stop actively exploiting children and accept other forms of regulation, and it needs to come with teeth.
> Except companies provide wholly inadequate safeguards and tools. They are buggy, inconsistent, easily circumvented, and even at time malicious. Consumers should be better able to hold providers accountable, before we start going after parents.
We could mandate that companies that market the products actually have to deliver effective solutions.
Yes, but how on earth is their malicious compliance at providing parental controls a good reason to go for the surveillance state that hurts absolutely everyone?
Social media operators love the surveillance state idea. That's why they aren't pushing against this.
I even cancelled YT Premium because their "made for kids" system interfered with being able to use my paid adult account. I urge other people to do the same when the solutions offered are insufficient.
The parents themselves weren't raised with the digital literacy required.
This doesn't put the parents off the hooks, if you or anyone can share any resources that are as easily consumable, viral and applicable as the content that is the issue that can reach parents I would be happy to help it spread.
The reality is kids today are facing the most advanced algorithms and even the most competent parents have a high bar to reach.
The solution is simple.
I want to permit whatever the pixels are on a childs screen. Full stop. That hasn't been solved for a reason. Because developing such a gate would work and not allow algorithms to reach kids directly and indirectly.
The alternative is not ideal, but until there's something better, what it will be and that's well proven for the mental health side of things of raising resilient kids who don't become troubled young adults - no need for social media, or touch screens until 10-13.
There are lots of ways to create with technology, and learning to use words (llms) and keyboards seems to increasingly have merit.
> "The parents themselves weren't raised with the digital literacy required."
At this point, that isn't true anymore. There was social media when the parents were school aged. The world didn't start when you were 10 and the Internet is a half century old.
So we should trust the governments of the world? The same governments that don't seem to be doing anything about a large group of people that visited a specific island to abuse minors?
> We'll try everything, it seems, other than holding parents accountable for what their children consume.
It’s not a fair fight. These are multi-billion dollar companies with international reach and decades of investment and research weaponized against us to make us all little addicts.
Additionally, it’s not fair or reasonable to ask parents to screen literally everything their kids do with a screen at all times any more than it was reasonable for your parents to always know what you were watching on TV at all times.
This is bootstraps/caveat emptor by a different name. It’s not “I want someone else to raise my kids.” It’s “the current state of affairs shouldn’t be so hostile that I have to maintain constant digital vigilance over my children.” Hell if you do people then lecture you about how “back in their day they played in the street and into the night” and call you a helicopter parent
You are screwed but not for the reason you claim. Its because you don't take any accountability for yourself. There is/was no hope for someone who does that at any point in human history. Is it fair? Nope...but it also doesn't mean you have 0 autonomy.
None of this push has anything to do with protecting children. Never has, never will. Stop helping them push the narrative, it's making the problem WAY worse.
ITT are a lot* of tech workers who made their money as a cog in the system poisoning the internet that future generations would have to swim in. I wonder if toxic waste companies also tell the parents it's strictly on them to keep their kids out of the lakes that are poisoned, but once flowed cleanly?
We live in a shared world with shared responsibilities. If you are working on a product, or ever did work on a product, that made the internet worse rather than better, you have a shared responsibility to right that wrong. And parents do have to protect their kids, but they can't do it alone with how systematically children are targeted today by predatory tech companies.
Age verification tech companies are lobbying heavily for governments to legally require their services. The proposed "solutions" are about funneling money into the hands of other tech companies and shady groups, while violating user privacy.
If anything, we should be banning the collection of any age related information to access social media and more mature content. We need companies to respect privacy, rather than legislation even more privacy violations.
If your goal is to "save the children", then sure, we can discuss this... if your goal, as a government, is to have everyone get some digital ID and tie their online identities to their real names, then you do just that.
We should stop pretending these age-verification rollouts are about protecting children, because they aren't and never have been.
Even if the world was full of responsible parents, there are still people and groups that want to establish a surveillance state. These systems are focused on monitoring and tracking online activity / limiting access to those who are willing to sacrifice their own personal sovereignty for access to services.
There is most definitely a cult that is obsessed with the book of revelation and seeing Biblical prophecy fulfilled, and if that isn't readily obvious to folks at this juncture in time, I'm not sure what it will take. I guess they'll have to roll out the mark of the beast before people will be willing to admit it.
It's funny, all the bible wankers screamed about "the mark of the beast" over things like RealID. Now we have fascists setting up surveillance and censorship tools to tie speech and movement to centralized ID...and they're lining up to lick boots.
You should need to show ID and prove you're over 18 to enter a church. At least we know they're actually harmful to children.
>> In the United States, you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using, yet somehow, the same social responsibility seems thrown out the window for parents and the web.
So anyone can walk into a shop and purchase these things unrestricted? It's not the responsibility of the seller too?
Tobacco, yes you can order pipe tobacco and cigars online sent to your house without ID.
Guns yes, you can buy a schmidt-rubin cartridge rifle or black powder revolver sent straight to your home from an online (even interstate) vendor no ID or background check, perfectly legal.
Alcohol yes, you can order wine straight to your house without ID.
These are all somewhat less known "loopholes" but not really turned out to be a problem despite no meaningful controls on the seller. You probably didn't even know about these loopholes, actually -- that's how little of a problem it's been.
>We'll try everything, it seems, other than holding parents accountable
The government took over most parenting functions, one at a time, until the actual parent does or is capable of doing very little parenting at all. If the government doesn't like the fact that it has become the parent of these children, perhaps it shouldn't have undermined the actual parents these last 80 years. At the very least, it should refrain from usurping ever more of the parental role (not that there is much left to take).
You yourself seem to be insulated from this phenomena, maybe you're unaware that it is occurring. Maybe it wouldn't change your opinions even if you were aware.
>If you want to actually protect children
What if I don't want to protect children (other than my own) at all? Why would you want to be these children's parents (you suggest you or at least others want to "protect" them), which strongly implies that you will act in your capacity as government, but then get all grumpy that other people are wanting to protect children by acting in their capacity of government?
The expectations on parents in USA are at their historical high. What are you talking on about in here. The expectation that parents will perfectly supervise them at every moment of their life till their adulthood is a.) new b.) at its historical max.
> holding parents accountable for what their children consume
There is a local dive bar down the street. I haven't expressly told my kids that entering and ordering an alcoholic drink is forbidden. In fact, that place has a hamburger stand out front on weekends and I wouldn't discourage my kids from trying it out if they were out exploring. I still expect that the bartender would check their ID before pulling a pint for them.
It takes a village to raise a child. There are no panopticons for sale the next isle over from car seats. We are doing our best with very limited tooling from the client to across the network (of which the tremendously incompetent schools make a mockery with an endless parade of new services and cross dependencies). It will take a whole of society effort to lower risks.
That same argument doesn't hold water on the internet. Its a communication medium. Its like a flow of information. You don't enter or leave physically spaces. the information flows to you where ever you are. trying to apply the same kinds of laws to the internet is a recipe for disaster because you are effecting everyone at the same time.
age verification doesn't work in favor of a tech corp like facebook as they will see some users leave, some because they don't have the age required and some because they don't want to do the verification
> We'll try everything, it seems, other than holding parents accountable for what their children consume.
The way to keep kids from eating (yummy) lead-based paint chips was not holding parents accountable to what their kids ate, but banning lead-based paint.
This tired argument again. It doesn’t work. It’s like keeping your kid from buying alcohol but all their friends are allowed to buy it. The whole age demographic has to be locked out of the ecosystem.
Well, yes. If your friends can all go 'round to David's house, where David's parents hand each child a case of beer and send them on their way, any attempt by the other parents to prohibit underage drinking is going to be ineffective. But most parents don't do that. (I've actually never heard of it.) So social solutions involving parent consensus clearly do work here.
"But it's behavioural!" I hear you cry. "What's stopping children from going out, buying a cheap unlocked smartphone / visiting their public library / hacking the parental control system, and going on the internet anyway?" And that's an excellent objection! But, what's stopping children from playing in traffic?
The thing is, what are the parents to do beyond restricting things? You find out some creep has been talking to Junior; do you talk to your local police department, state agency, or to the feds?
We've never properly acted upon reports of predators grooming children by investigating them, charging them, holding trials, and handing down sentences on any sort of large scale. There's a patchwork of LEOs that have to handle things and they have to do it right. Once the packets are sent over state lines, we have to involve the feds, and that's another layer.
Previously, I would have said it's up to platforms like Discord to organize internal resources to make sure that the proper authorities received reports, because it felt like there were instances of people being reported and nothing happening on the platform's side. Now, given recent developments, I'm not sure we can count upon authorities to actually do the job.
> The thing is, what are the parents to do beyond restricting things?
Well, I can't speak for parents (as in all parents). I can, however, tell you what we did.
When two of my kids were young we gave them iPods. The idea was to load a few fun educational applications (I had written and published around 10 at the time). Very soon they asked for Clash of Clans to play for a couple of hours on Saturdays. We said that was OK provided they stuck to that rule.
Fast forward to maybe a couple of months later. After repeated warnings that they were not sticking to the plan and promises to do so, I found them playing CoC under the blankets at 11 PM, when they were supposed to be sleeping and had school the next day.
I did not react and gave no indication of having witnessed that.
A couple of days later I asked each of them to their room and asked them to place their top ten favorite toys on the floor.
I then produced a pair of huge garbage bags and we put the toys in them, one bag for each of the kids.
I also asked for their iPods.
No anger, no scolding, just a conversation at a normal tone.
I asked them to grab the bags and follow me.
We went outside, I opened the garbage bin and told them to throw away their toys. It got emotional very quickly. I also gave them the iPods and told them to toss them into the bin.
After the crying subsided I explained that trust is one of the most delicate things in the world and that this was a consequence of them attempting to deceive us by secretly playing CoC when they knew the rules. This was followed by daily talks around the dinner table to explain just how harmful and addictive this stuff could be, how it made them behave and how important it was to honor promises.
Another week later I asked them to come into the garage with me and showed them that I had rescued their favorite toys from the garbage bin. The iPods were gone forever. And now there was a new rule: They could earn one toy per month by bringing top grades from school, helping around the house, keeping their rooms clean and organized and, in general, being well behaved.
That was followed by ten months of absolutely perfect kids learning about earning something they cherished every month. Of course, the behavior and dedication to their school work persisted well beyond having earned their last toy. Lots of talks, going out to do things and positive feedback of course.
They never got the iPods back. They never got social media accounts. They did not get smart phones until much older.
To this day, now well into university, they thank me for having taken away their iPods.
So, again, I don't know about parents in the aggregate, but I don't think being a good parent is difficult.
You are not there to be an all-enabling friend, you are there to guide a new human through life and into adulthood. You are there to teach them everything and, as I still tell them all the time, aim for them to be better than you.
its like the food industry blaming parents, sugar like apps/games are designed to be addictive to the point they are act like a drug, stop the drug dealer, not the consumer.
Blaming parents is a bit unwarranted, when on the other end we have business interests driven by perverse incentives of predating on children’s gullibility for their own profit.
When you say “We‘ll try everything” that is simply not true, in particular what we do not try is strict consumer protection laws which prohibits targeting children. Europe used to have such laws in the 1980s and the 1990s, but by the mid-1990s authorities had all but stopped enforcing them.
We have tried consumer protection, and we know it works, but we are not trying it now. And I think there is exactly one reason for that, the tech lobby has an outsized influence on western legislators and regulators, and the tech industry does not want to be regulated.
It is literally the parents responsibility. You want to blame someone else. Raising a kid doesn't mean letting society raise them you have to make tough choices.
If parents can't handle that they can give them up to the state.
> We'll try everything, it seems, other than holding parents accountable for what their children consume.
You've missed the point. No legislator or politician cares about what the parents are doing.
What they care about is gaining greater control of people's data to then coerce them endlessly (with the assitance of technology) into acting as they would liike. To do that, they need all that info.
"The children" is the sugar on the pill of de-anonymised internet.
A physical realm that is safe for children to explore in their own is clearly preferable to one where it’s transgressive to let a child go outside without an escort.
It is plausible that the same applies to the digital realm.
Looking at actual data regarding Australia's landmark legislation setting a minimum age of 16 for social media access with enforcement starting on December 10, 2025 indicates weakened data protection. The Australian data suggests that while the legislation has successfully cleared the decks of millions of underage accounts (4.7 million account deactivations together with increased VPN usage and "ghost" accounts to bypass restrictions), it has simultaneously forced platforms to rely on third-party identity vendors, with the following failures so far:
1) Persona (Identity Vendor) Exposure (Feb 20, 2026): researchers discovered an exposed frontend belonging to Persona, an identity verification vendor used by platforms like Discord. This system was performing over 260 distinct checks, including facial recognition and "adverse media" screening, raising massive concerns about the scope creep of age verification.
2) Victorian Department of Education (Jan 2026): a breach impacting all 1,700 government schools exposed student names and encrypted passwords. This is a primary example of how child-related data remains a high-value target.
3) Prosura Data Breach (Jan 4, 2026): this financial services firm suffered a breach of 300,000 customer records.
4) University of Sydney (Dec 2025): a code library breach affected 27,000 people right as the new legislation was rolling out.
I was hoping for more on "... The only way to prove that you checked is to keep the data indefinitely." What do the laws say on this? What data is this? I would have assumed that just like a bouncer can check my ID and hand it back to me, a digital system can verify my identity and not hold onto everything (e.g. the actual photo of my ID).
If we're going to do this at all, it should be on the device, not the website/app. Parents flag their child's device or browser as under 18, and websites/apps follow suit. Parents get the control they're looking for, while service providers don't have to verify or store IDs. I guess it's just more difficult to pressure big dogs like google/apple/mozilla for this than pornhub and discord.
I’ve wondered if a age verification gig worker app could ever be viable: have people you can meet in person to prove your age without ever uploading any PII anywhere. Then issue a private key proving you are who you say you are.
There are alternatives to ID verification if the goal is protecting children.
You could, for example, make it illegal to target children with targeted advertising campaigns and addictive content. Then throw the executives who authorized such programs in jail. Punish the people causing the harm.
If targeting children with advertising got corporate execs thrown in jail, wouldn't the companies just roll out age verification for users like they do now? How would this rule change their behavior? They have to know who the children are to not target them.
Stronger punishment creates more of an incentive to age verify. Which is basically why it's happening now.
> They have to know who the children are to not target them.
There is a difference between identifying specific children, and running programs that target children more generally; and / or having research that shows how your product harms children, and failing to do anything to stop it. We can tackle both of those issues without requiring age verification. We're headed down the path of age verification because we know now that not only is social media harmful, it's especially harmful to kids, and has been specifically targeted to them. Those are things that can be fixed, regardless of how you feel about age verification. Its not different than tobacco being not allowed to create advertisements for kids; its the same type of people doing the same types of things in the end.
How about instead of protecting the kids we protect everyone and do something to stop these gargantuan companies from driving engagement at any cost in order to rake ad revenue.
I’ve never met a single person who believed Facebook was a force for good. Why allow it to exist in its current form at all.
If these companies are the new town square then make a real online town square run by the government people can use if they so choose and then break up the monopolies that are destroying the fabric of all the societies on the globe.
We never in a million years should have allowed these companies to establish global scale communications platforms without the ability to properly moderate them with human intervention and that’s not even to speak of the actual nefarious intent they possess to drive engagement and the anti democratic techno feudalist sickening shit we see from their CEOs like Thiel et al.
They are a plague on our species that makes sport betting apps look like childs play.
>Then throw the executives who authorized such programs in jail.
Gee, I wonder if the executives who are suspected of doing such things haven't spent the last 100 years building the infrastructure necessary to avoid charges, let alone jail time? Large corporate legal departments, wink-wink-nudge-nudge command and control hierarchies where nothing incriminating is ever put into writing, voluminous intra-office communications that bury even the circumstantial evidence so deeply no jury could understand it even if the plaintiffs/state could uncover it, etc.
Anyone over the age of 12 that thinks corporate entities can be made to be accountable in a meaningful way is more than naive. They are cognitively defective. Or is it that you realize they can't be held accountable but you'd rather maintain the status quo than contemplate a country which abolished them and enforced that all business was the conducted by sole proprietorships and (small-n) partnerships?
Undermining data protection and privacy is clearly the point. The fact that it's happening everywhere at the same time makes it look to me like a bunch of leaders got together and decided that online anonymity is a problem.
It's not like kids having access to adult content is a new problem after all. Every western government just decided that we should do something about it at roughly the same time after decades of indifference.
The "age verification" story is casus belli. This is about ID, political dissent, and fears of people being exposed to the wrong brand of propaganda.
You are missing the profit driven angle. Age verification companies and their lobbyists are pouring massive amounts of resources into lobbying for mandatory age verification. And the reason why can be pretty simple. They get richer of off violating people's privacy, especially when those privacy violations are legally required.
How far does it go? Are all bugs features? Shall we assume that Boeing (via MCAS) and Ford (via the Pinto) were trying to kill their passengers? There's a difference between ulterior motive and incompetent execution of expressed intention.
> The fact that it's happening everywhere at the same time makes it look to me like a bunch of leaders got together and decided that online anonymity is a problem.
"People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public." - Adam Smith
No, this is proven false by reducing this theory to the individual level. Anyone who has tried to design/imagine -> actually build something, be it an artist, architect, song writer, programmer, or otherwise, knows there is inevitably a gap between design and realization. No one involved in that process would at any point consider the gap to be part of its “purpose”.
People do hide their intentions but that doesn’t give us a license to reduce complex system dynamics to absurdities.
Bugs get fixed when systems are iterated on. They also tend to be single results from single mistakes, not compound end results of the implementation.
Design features tend to persist.
The phrase/idiom "the purpose of a system is what it does" maps best to situations where a multiple decisions within a system make little sense when viewed through the lens of the stated purpose, but make perfect sense if the actual outcome is the desired one.
It is an invitation to analyze a system while suspending the assumption of good faith on the part of the implementors.
Give our personal devices have the ability to verify our age and identity securely and store on device like they do our fingerprint or face data.
Services that need access only verify it cryptographically. So my iPhone can confirm I’m over 21 for my DoorDash app in the same way it stores my biometric data.
The challenge here is the adoption of these encryption services and whether companies can rely on devices for that for compliance without having to cut off service for those without it set up.
The real problem with this is that the ultimate objective isn't age verification, it's complete de-anonymization. I think different groups want this for different reasons, but the simple reality is that minimizing the identify information transferred around is antithetical to their goals.
If we create age verification tools with strong privacy protections that solve the problems they raise, we can can call their bluff.
If we fight every and any solution, we may end up with their solution, becauase they build it. We end up in the position of saying "don't use the thing they built" without offering alternatives. I'd rather be saying "use whatbwe built, ita is better."
Google/Apple already know where you and your mistress live. In case you pay for any service, they've got your identity too. Ever had a single shipment confirmation to your address come to your mail? They know who you are.
The hardware providers already have the information. You only need to make them reveal it to 3rd parties.
I think this is what my German electronic ID card does. The card connects to an app on my phone via NFC, a service can cryptographically verify a claim about my age, and no additional info is leaked to the service provider or the government.
I think this is actually the correct way to move forward.
We should be able to verify facts about people on the internet without compromising personal data. Giving platforms the ability to select specific demographics will, in my view, make the web a better place. It doesn’t just let us age restrict certain platforms, but can also make them more authentic. I think it’s really important to be able to know some things to be true about users, simply to avoid foreign election interference via trolling, preventing scams and so much more.
With this, enforcement would also be increasingly easy: Platforms just have to prove that they’re using this method, e.g. via audit.
And that your iPhone and other devices become more restrictive if this is not implemented.
I presume most devices in the world do not have a solution to this (desktop windows computer for instance).
I’m not sure if it’s a good idea for like every porn website in the world to require a secret enclave to work. But this sounds better to me than storing users photo ids in an s3 bucket
The solution has always been there: Assume everybody is an adult.
The only reasonable way to deal with children on the Internet is to treat Internet access like access to alcohol/drugs. There is no need for children to access the Internet full stop.
Internet is a network in which everything can connect to everything, and every connected machine can run clients, servers, p2p nodes and what not. Controlling every possible endpoint your child might connect to is not feasible. Shutting the entire network down because "won't somebody please think of the children" is not acceptable.
And, don't let them trick you. This is the endgoal. An unprecedented level of control over the flow of information.
So you would deny children the greatest source of knowledge in the history? I have learned math and programming thanks to unlimited access to the web and would not be where I am without it.
> And the only way to prove that you checked is to keep the data indefinitely.
This is a false premise already; the company can check the age (or have a third party like iDIN [0] do it), then set a marker "this person is 18+" and "we verified it using this method at this date". That should be enough.
And how do they prove to me they (and no 3rd party providers) aren't actually storing the data? I simply don't trust companies telling me they won't store something, so to me the only acceptable option is the data to never leave my device.
If third party does verification with ZKP, you only need to trust that third party. The company that requires verification will not have any data to store.
Nope, as the article notes, it is actually almost never enough because it does not stand up to legal scrutiny. And for good reason: there's no way to conclusively prove that the platform actually verified the user's age, as opposed to simply saying they did, before letting them in.
Most of this debate makes more sense if the actual goal is liability reduction, not child safety. If it were genuinely about protecting kids, you'd regulate infinite scroll and algorithmic engagement optimization, not who can log in.
As we can see from this user who has fallen ill with Epstein Brain, the real victims of social media algorithms are actually adults.
He is currently prepping to overthrow his local Pizzeria while the rest of us argue as if social media even exists anymore (it doesn't, it's just algorithmic TV now).
Most people have only a light grasp of what infinite scroll and algorithmic engagement optimization means. They know they like the scrolling apps more, but it takes a bit of research and education to really understand the specific mechanics and alternatives. We get this well as tech literate but many people using these apps today, are neither tech literate, nor even remember a world before infinite scrolling media was a thing. It seems incredibly obvious mechanism but I've explained it to people, and it takes a few times for it to really sink in and become a specific mental model for how they see the world.
I'm happy they don't because they don't know what they're doing. Hopefully countries prioritizing public health will implement a social media ban for the vulnerable population which gives them some time to grow up without all that garbage poisoning their brains. Then when they're 16 or whatever age, hopefully by that age we'll have realized that this is actually like cigarettes and everyone, all age groups treat it like that.
Better than muddying the waters trying to make it less addictive but then letting them on there when their brains aren't ready.
If the concern is time-wasting, even having upvotes or likes and sorting on them is plenty engaging. I spent thousands of hours as a teenager on Reddit, HN, and the old blue Facebook chronological feed.
I think it's because there's always a group of nosy busybodies finger-wagging about protecting the children and we have to do decorative theatrics to satiate whatever narratives they've convinced themselves of
This is a group particularly beloved by politicians, because you can pretty much use them as a smokescreen whenever you want to pass authoritarian legislation...
I bet that the Chat Control lobbyist groups are involved to some degree, as they tried to require age verification in Chat Control before it was shot down. They probably haven't given up after that defeat.
Interestingly, regulating these would be good for adults as well. A lot of these very large online companies enjoy an asymmetric power advantage. We should aim to protect ourselves against them, in addition to our children.
I think this would be good for everybody, not just kids. It doesn't even have to be complicated: Just that after a certain amount of time scrolling/watching, put in a message asking if it's maybe time to stop with some information about how these algorithms try to keep you for as long as possible. Maybe a link to a government page with more information.
It doesn't have to be perfect and there will of course be easy workarounds to hid the warnings for people that want. The goal is to improve the situation though, not solve it perfectly. Like putting information about the dangers of smoking on packages of smokes; it doesn't stop people from smoking but it does make the danger very easy to learn.
Big tech likes this because there are a lot more face recognition technologies in the wild in real life and being able to connect all real life data to online data is quite valuable. It's also quite possibly the largest training set ever for face recognition if ids are stored and given how ids and images are sold across many companies it seems very high probability that some company will retain the data rather than delete after use.
China (and US via latin american countries and it's own poor people ...via benefit programs access via id.gov) is testing both biometrics and device id to evaluate pros and cons, and to merge data, when it come to autocratic control.
In china there are places to scan you device and get coupons. usually at elevators in residential buildings so they can track also if you're arriving or leaving easily.
In the US every store tracks and report to ad networks your Bluetooth ids. and we know what happens to ad networks.
US now requires cars to report data, which was optional before (e.g. onstar) and china joined on this since the ev boom.
> US now requires cars to report data, which was optional before (e.g. onstar) and china joined on this since the ev boom.
This isn't true, there is no federal requirement for a cellular modem in cars. Most modern cars have one, but nothing prevents you from disabling or removing it. I certainly would not tolerate such a "bug" in by car.
> In the US every store tracks and report to ad networks your Bluetooth ids.
This also isn't true, modern phones randomize Bluetooth identifiers. I personally disable Bluetooth completely.
So don't use big tech. No one needs discord, or porn, or social media. But this is not the answer. The answer is fighting to change the laws. And we can start changing the laws by boycotting big tech. Laws are changed by money flows, not ideology.
Even if you design the perfect system, kids will just ask parents for an unlocked account, many parents will accept, myself included. My kids have full access to the internet and I never used parental control, I talk to them. Of course, I don't want to give parenting advice, that would be presumptuous. But, my point is that a motivated kid will find a way, you have to "work" on that motivation.
Many of the worst present on the internet is not age gated at all, you have millions of porn websites without even a "are you over 18" popup. There are plethora of toxic forums...
Of course it's a complex problem, but the current approach sacrifice a lot of what made the internet possible and I don't like it.
The solution is education. The most well adjusted kids I've seen are told flat out about the risks they'll face and, in general, helped to understand there are break points where things get too serious for them to try to deal with on their own.
I think that if you block all porn, social media, etc. all that does is create an opportunity for kids to be shifted to platforms controlled by bad actors. Adults fall victim to pig butchering schemes where they're given 100% fake investment apps that look completely real and they don't realize they're getting scammed until they try to get their money out of the system. There was a story in Canada about a guy and his daughter that thought they had $1 million in savings and it was a pig butchering scam with a fake app.
Are kids today equipped to deal with that? What happens when someone tells a kid to get app XYZ because it's un-moderated, but that app is controlled by a bad actor? Imagine a Snapchat like platform promising ephemeral messaging with simple username / password on-boarding so parents don't see account creation emails, but the app is run by organized crime.
I don't even know how you handle it if they manage to normalize the idea of children sending ID to random platforms. In addition to getting platform shifted and exploited, kids will be vulnerable to sending their real ID to bad actors.
The whole thing seems insane to me. Spend some money on education. That's the only long-term option.
> Many of the worst present on the internet is not age gated at all, you have millions of porn websites without even a "are you over 18" popup. There are plethora of toxic forums...
This is what I find most insane about the UK's age verification law. It's literally so easy to find adult content without proving your age... You can literally just type in "naked women" into a search engine and get porn...
To call it ineffective would be an understatement. Finding adult content on the web almost just as easy as it's always been. The only thing it's made harder is accessing adult content from the normie-web – you can't access porn on places like Reddit anymore, but you can access porn on 4chan and other dodgy adult sites.
If the argument is "think about the kids" there are more effective ways to do it... Requiring device-level filtering for example would likely be more effective because it could just blacklist domains with hosting adult content unless unrestricted. It would also put more power in the parents hands about what is and what isn't restrict.
>Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: many users who are legally old enough to use a platform do not have government ID.
I'm not convinced age restrictions like this are a good idea. But yeah, the non-availability of IDs in the US is a self-inflicted problem.
Another example where this plays a role are voter registration and ID requirements for voting in the US. It is entirely bizarre to me how these discussions just accept it as a law of nature that it's expensive and a lot of effort to get an ID. This is something that could be changed.
You may underestimate the levels of classism and racism in the US. Go on and bring up a conversation about it and you'll eventually get someone talking about how that would be socialism and we can't do that.
The problem is not that we aren’t doing age verification, it’s that a group of authoritarians are trying to force mandatory implementation of age verification (and concomitant removal of anonymity). That’s the problem.
It seems like the solution is to provide an age verification mechanism with robust privacy protections. That way when we offer a solution that works for all of their states concerns, if they disagree with the privacy preserving approach we force them to say outright "I want to keep a record of every website you visit."
Not my point in the comment but my personal opinion:
To regulate access to addicting material.
This is done in the physical world - why should digital be lawless when it applies to the same human behaviors?
I've been addicted to a lot of digital media parts in harmful ways and I had the luck and support to grow out of most of it. A lot of people are not that lucky.
I don't think that's what the original comment was discussing at all...
If governments want to require private companies to verify ages, those same governments need to provide accessible ways for their citizens to get verification documents, starting from the same age that is required.
What problem? I don't think internet websites and apps actually need to know the face, age, or name of their users if their users don't want to provide that information. With exceptions for things like gambling websites.
For example, with a German ID you can provide proof that you are older than 18 without giving up any identifying information. I mean, nobody uses this system at the moment, but it does exist and it works.
It costs money. Getting an ID here costs about 5% of minimum wage if you order it online + travel (you still have to travel there for the photos and pickup). It costs even more if you apply in person.
You could buy 19 gallons of milk for that money (80 liters).
Exactly right. Also, better to be overly restrictive here given the well documented harms of social media on young minds. If the law stipulates that you must be 15 to obtain social media access, and most people don't get their IDs until 18, then most people will stay off social media for another three years: no big deal.
Non of which is necessary to verify you crossed age threshold. Websites are just lazy, maybe on purpose. Accepting this kind of low effort age verification would be foolish.
Which, adult is merely someone who's just turned 18. What're the chances that any of them just post the thing up on 4chan? (I'm going to go with 100% chance of that happening.)
I worked for a decade in what I would consider the highest level of our kids' privacy ever designed, at PBS KIDS. This was coming off a startup that attempted to do the same for grownups, but failed because of dirty money.
Every security attempt becomes a facade or veil in time, unless it's nothing. Capture nothing, keep nothing, say nothing. Kids are smart AF and will outlearn you faster than you can think. Don't even try to capture PII ever. Watch the waves and follow their flow, make things for them to learn from but be extremely careful how you let the grownups in, and do it in pairs, never alone.
I would like to take the discussion in the other direction. How about we offer safe spaces instead of banning the unsafe spaces for kids.
Similar to how there is specific channels for children on the TV. Perhaps the government can even incentivize such channels. It would also make it easier for parents to monitor and set boundaries. Parents would only need to monitor if the tv is still tuned to disney channel or similar instead of some adult channels.
Similarly this kind of method could be applied to online spaces. Ofcourse there will be some kids that will find ways around it but they will most likely be outliers.
>How about we offer safe spaces instead of banning the unsafe spaces for kids.
Children shouldn't be associating with other children, except in small groups. Even the typical classroom count is far too large. They become the nastiest, most horrible versions of themselves when they congregate. A good 90% of the pathology of public schools can be blamed on the fact that, by definition, public schools require large numbers of children to congregate.
I have no idea where this idea that Internet is toxic to children is coming from. Is that some type of moral panic? Weren't most of you guys children/adolescents during the 2000's?
This is like rhetorically asking, "Are you saying that doom and marylin manson aren't harmful to children?"
The problem with social media isn't the inherent mixing of children and technology, as if web browsers and phones have some action-at-a-distance force that undermines society; it's the 20 years or so they spent weaponizing their products into an infinite Skinner box. Duck walk Zuckerburg.
This is all assuming good faith interest in "the children," which we cannot assume when what government will gain from this is a total, global surveillance state.
Last time I checked there's no scientific consensus if social media causes harm at all. The best studies found null or very small effects. So yeah, I am skeptical it is harmful.
Why is no one talking about using zero knowledge proofs for solving this? Instead of every platform verifying all its users itself (and storing PII on its own servers), a small number of providers could expose an API which provides proof of verification. I'm not sure if some kind of machine vision algorithm could be used in combination with zero-knowledge technology to prevent even that party from storing original documents, but I don't see why not. The companies implementing these measures really seem to be just phoning it in from a privacy perspective.
People are talking about it, at least here anyway.
The reason you don’t see it in policy discussion from the officials pushing these laws is because removal of anonymity is the point. It’s nit about protecting kids, it never was. It’s about surveillance and a chilling effect on speech.
You do see it in policy discussions from officials in the EU. You probably don't see it in policy discussions in the US because the groups that should be telling US officials how to do age verification without giving up anonymity are not doing so.
Zero knowledge proofs are only private and anonymous in theory, and require you to blindly trust a third party. In practice the implementation is not anonymous or private.
Technologists engage in an understandable, but ultimately harmful behavior: when they don't want outcome X, they deny that the technology T(X) works. Consider key escrow, DRM, and durable watermarking alongside age verification. They've all been called cryptographically impossible, but they're not. It's just socially obligatory to pretend they can't be done. And what happens when you create an environment in which the best are under a social taboo against working on certain technologies? Do you think that these technologies stop existing?
LOL.
Of course these technologies keep existing, and you end up with the worst, most wretched people implementing them, and we're all worse off. Concretely, few people are working on ZKPs for age verification because the hive mind of "good people" who know what ZKPs are make working on age verification social anathema.
It's worse than that. Money will continue to flow in to companies like Persona, becauae there is no alternative. Persona will use that money to continue to build profiles on people.
Society would be better off paying cryptography researchers and engineers at NIST.
Parents are competing with multi-trillion dollar companies who have invested untold amounts of cash and resources into making their content addictive. When parents try to help their children, it's an uphill battle -- every platform that has kids on it also tends to have porn, or violence, or other things, as these platform generally have disappointingly ineffective moderation. Most parents turn to age verification because it's the only way they can think of to compete with the likes of Meta or ByteDance, but the issue is that these platforms shouldn't have this content to begin with. Platforms should be smaller -- the same site shouldn't be serving both pornography and my school district's announcement page and my friend's travel pictures. Large platforms are turning their unwillingness to moderate into legal and privacy issues, when in fact it should simply be a matter of "These platforms have adult content, and these ones don't". Then, parents can much more easily ban specific platforms and topics. Right now there's no levers to pull or adjust, and parent s have their hands tied. You can't take kids of Instagram or TikTok -- they will lose their friends. I hate the fact that the "keep up with my extended family" platform is the same as the "brainrot and addiction" one. The platforms need to be small enough that parents actually have choices on what to let in and what not to. Until either platforms are broken up via. antitrust or until the burden of moderation is on the company, we're going to keep getting privacy-infringing solutions.
If you support privacy, you should support antitrust, else we're going to be seeing these same bills again and again and again until parents can effectively protect their children.
Age checks sound simple, but they tend to turn into “please create a permanent ID for the internet.” I’d love a version that’s more like a one-time wristband than a loyalty card.
I ran into this when building a kids' education app a few years ago. We explored a bunch of options, from asking for the last four digits of their parents' SSN (which felt icky, even though it's just a partial number) to knowledge-based authentication (like security questions, but for parents).
Ultimately, we went with a COPPA-compliant verification service, but it added friction to the signup process.
It's a trade-off between security and user experience, and there's no perfect solution, unfortunately.
We are missing accessible cryptographic infrastructure for human identity verification.
For age verification specifically, the only information that services need proof of is that the users age is above a certain threshold. i.e. that the user is 14 years or older. But in order to make this determination, we see services asking for government ID (which many 14-year-olds do not have), or for invasive face scans. These methods provide far more data than necessary.
What the service needs to "prove" in this case is three things:
1. that the user meets the age predicate
2. that the identity used to meet the age predicate is validated by some authority
3. that the identity is not being reused across many accounts
All the technologies exist for this, we just haven't put them together usefully. Zero knowledge proofs, like Groth16 or STARKs allow for statements about data to be validated externally without revealing the data itself. These are difficult for engineers to use, let alone consumers. Big opportunity for someone to build an authority here.
>We are missing accessible cryptographic infrastructure for human identity verification.
like most proposed solutions, this just seems overcomplicated. we don't need "accessible cryptographic infrastructure for human identity". society has had age-restricted products forever. just piggy-back on that infrastructure.
1) government makes a database of valid "over 18" unique identifiers (UUIDs)
2) government provides tokens with a unique identifier on it to various stores that already sell age-restricted products (e.g. gas stations, liquor stores)
3) people buy a token from the store, only having to show their ID to the store clerk that they already show their ID to for smokes (no peter thiel required)
4) website accepts the token and queries the government database and sees "yep, over 18"
easy. all the laws are in place already. all the infrastructure is in place. no need for fancy zero-knowledge proofs or on-device whatevers.
What you’re describing is infrastructure that doesn’t necessarily exist right now for use online, and has all the privacy problems described. Why should I have to share more than required?
The government will want some way to uncover who bought the token. They'll probably require the store to record the ID and pretend like since it's a private entity doing it, that it isn't a 4A violation. Then as soon as the token is used for something illegal they'll follow the chain of custody of the token and find out who bought it.
No matter what the actual mechanism is, I guarantee they will insist on something like that.
A significant obstacle to adoption is that cryptographic research aims for a perfect system that overshadows simpler, less private approaches. For instance, it does not seem that one should really need unlinkability across sessions. If that's the case, a simple range proof for a commitment encoding the birth year is sufficient to prove eligibility for age, where the commitment is static and signed by a trusted third party to actually encode the correct year.
I agree. I've been researching a lot of this tech lately as a part of a C2PA / content authenticity project and it's clear that the math are outrunning practicality in a lot of cases.
As it is we're seeing companies capture IDs and face scans and it's incredibly invasive relative to the need - "prove your birth year is in range". Getting hung up on unlinkable sessions is missing the forest for the trees.
At this point I think the challenge has less to do with the crypto primitives and more to do with building infrastructure that hides 100% of the complexity of identity validation from users. My state already has a gov't ID that can be added to an apple wallet. Extending that to support proofs about identity without requiring users to unmask huge amounts of personal information would be valuable in its own right.
Your crypto nerd dream is vulnerable to the fact that someone under 18 can just ask someone over 18 to make an account for them. All age verification is broken in this way.
There is a similar problem for people using apps like Ubereats to work illegally by buying an account from someone else. However much verification you put in, you don't know who is pressing the buttons on the screen unless you make the process very invasive.
You seem to have missed requirement #3 -> tracking and identifying reuse.
An 18-year-old creating an account for a 12-year-old is a legal issue, not a service provider issue. How does a gas station keep a 21-year-old from buying beer for a bunch of high school students? Generally they don't, because that's the cops' job. But if they have knowledge that the 21-yo is buying booze for children, they deny custom to the 21-yo. This is simple.
Even if the problem is perfectly solved to anonymize the ID linked to the age, you still have the issue that you need an ID to exercise your first amendment right. 1A applies to all people, not just citizens, and it's considered racist in a large part of the US to force someone to possess an ID to prove you are a citizen (to vote) let alone a person (who is >= 18y/o) w/ 1A rights.
In general, any government already has your information, and it's naive to think that they don't; if you pay taxes, have ever had a passport, etc. they already have all identifying information that they could need. For services, or for the government knowing what you do (which services you visit), then a zero-knowledge proof would work in this case.
> Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, after building systems and normalizing behavior that protects them from the greater legal risks. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.
This rebuttal to privacy preserving approaches isn't compelling. Websites can split the difference and use privacy preserving techniques when available, and fall back to other methods when the user doesn't have an ID. I'd go further and say websites should be required to prioritize privacy preserving techniques where available.
There is a separate issue of improving access to government ID. I think that is important for reasons outside of age verification. Increasingly voting, banking, etc... already relies on having an ID.
Does each service really need to collect this data from the user directly? They could instead have the user authorise them by e.g. OAuth2 to access their age with one of the de-facto online-identity-providers. I would be surprised if they didn't implement an API for this sometime soon, cause it would place them as the source of truth and give them unique access to that bit of user data. Seems like a chance and position they wouldn't want to lose.
The purpose is to control the Internet. They've been trying this for ages. They tried with terrorism and other things. Now the excuse is protecting children.
Not exactly a good moment for this caste of politicians to pretend they care about children's well-being, though.
Social media, at least, are reaching the state the smoking had in the 80's when it started to be banned. After an era of praising (00's) scepticism followed (10's) until an underage ban comes (20's). Maybe in the 30's they will be banned in public spaces!
The problem of identifying a value for each person is very difficult. But government's role stops there. Until the teenager's screen more factors stay in the middle (parents, peers, criminals). I am curious how it turns out eventually. As a parent, I have already banned SM for my children, so not "affected" by the new policy.
Ban in public spaces might not make sense since there isn't a proximity hazard like smoke. But I can imagine after kids are banned from social media and we see how this was an obviously positive effect on their mental health, we will begin to acknowledge that this stuff is toxic for adults as well.
We could start to ban many of the mechanisms social media companies have deployed over the last 10 years. Infinite scrolling, algorithmic feeds of "creator" content, AI generated ragebait from bot accounts, etc. I'd love to see social media reverted back to when it was just holiday photos from your friends.
"Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law" - when you make rules contradictory, someone always violate these laws, and you can use selective persecution to "convince" companies to favor you, the incumbent politician. You don't even have to use such power, just a "joke" may be enough to send have any rational CEO licking your shoes.
European proponents of "anti-big-tech action" make it pretty explicit - broad discretionary power should be given to executive branch, because otherwise "international corporations" will use "loopholes" (and these "loopholes" are, in practice, explicitly written laws used as intended).
Someone explain me like I'm 5: there are some solutions already in effect that are based on cryptographically generated, anonymous, one-time use tokens that allow to confirm adults's age without being tied up yo your ID. Why on earth even technically skilled people completely ignore those? Is this pure NIMBY ignorance or am missing something?
Because those solutions always have obvious flaws. If the cryptographic token is anonymous, how do you know the user verifying is the same one who generated the token? How do you know the same cryptographic key isn't verifying several accounts belonging to other people?
They are one time use by definition. You can't know they are used by respectful owner, but the idea is you have to provide a new token every few weeks/months. Much like when using other services nowadays, I mean even Gmail will have you authorize every few months even if you didn't log out. Plus you fine/prosecute those who sell/misuse theirs. Just like you prosecute adults who buy kids alcohol or other substances.
Obvious flaws are OK. I absolutely hate the Nirvana fallacy that you people think is acceptable here, while hundreds of millions of kids suffer from serious developmental issues, as reported left and right by all kinds of organizations and governments themselves.
It's kind of weird to me how every article on this topic here has people rushing to comment within a couple minutes with some generic "yes I too support ID checks for internet use!". Has the vibe really shifted so much among tech-literate people?
Although there is some organic support, there is a lot of coordinated astroturfing. It’s apparent if you watch the discussions across platforms, there are obvious shared talking points that come in waves.
Governments (and a few companies) really want this.
What are some links to HN comments that you (or anyone else) feel is "coordinated astroturfing"?
The site guidelines ask users to send those to us at hn@ycombinator.com rather than post about it in the threads, but we always look into such cases when people send them.
It almost invariably turns out to simply be that the community is divided on a topic, and this is usually demonstrable even from the public data (such as comment histories). However, we're not welded to that position—if the data change, we can too.
> Governments (and a few companies) really want this.
The cynic in me fears they don't want a privacy-preserving solution, which blinds them to 'who'. Because that would satisfy parents worried about their kids and many privacy conscious folks.
Rather, they want a blank check to blackmail or imprison only their opponents.
> It’s apparent if you watch the discussions across platforms, there are obvious shared talking points that come in waves.
How do you know what is "shared talking points" vs "humans learning arguments from others" and simply echoing those? Unless you work at one of the social media platforms, isn't it short of impossible to know what exactly you're looking at?
> there are obvious shared talking points that come in waves.
Groups of people who wake up at the same time of the day often have a tendency to be from a similar place, hold similar values and consume similar media.
Just because a bunch of people came to the same conclusion and have had their opinions coalesce around some common ideas, doesn't mean it's astroturfing. There's a noticeable difference between the opinions of HN USA and HN EU as the timezones shift.
More than a few companies. Nothing would allow advertisers to justify raising ad rates quite like being able to point out that their users are real rather than bots.
> there is a lot of coordinated astroturfing. It’s apparent if you watch the discussions across platforms, there are obvious shared talking points that come in waves.
Is that really evidence of astroturfing? If we're in the middle of an ongoing political debate, it doesn't seem that far fetched for me that people reach similar conclusions. What you're hearing then isn't "astro-turfing" but one coalition, of potentially many.
I often hear people terrified that the government will have a say on what they view online, while being just fine with google doing the same. You can agree or disagree with my assesment, but the point is that hearing that point a bunch doesn't mean it's google astroturfing. It just means there's an ideology out there that thinks it's different (and more opressive seemingly) when governments do it. It means all those people have a similar opinion, probably from reading the same blogs.
"Real" user verification is a wet dream to googlr, meta, etc. Its both a ad inflation and a competive roadblock.
The benefits are real: teens are being preyed upon and socially maligned. State actors and businesses alike are responsible.
The technology is not there nor are governments coordinating appropiate digital concerns. Unsurprising because no one trusts gov, but then implicitly trust business?
Yeah, so obviously, its implementation that will just move around harms.
I think we should be careful of writing off this sea change as simple professional influence campaigns. That kind of thinking is just what got Trump to the Whitehouse, and is currently getting the immigrants rounded up.
Things that didn't seem likely to have broad support previously, now are seen as acceptable. In the 90's no one could envision rounding up immigrants. No one could envision uploading an ID card to use ICQ. No one could envision the concept of DE-naturalization or getting rid of birthright citizenship.
Today, in the US for instance, there are entire new generations of people alive. And many, many people who were alive in the 90's are gone. Well these new people very much can envision these things. And they seem to have stocked the Supreme Court to make all these kinds of things a reality.
All because the rest of us keep dismissing all of this as just harmless extreme positions that no one in society really supports. We have to start fighting things like this with more than, "It's not real."
The industry clearly prefers a system in which using the internet requires full identification. There are many powerful interests that support this model:
- Governments benefit from easier monitoring and enforcement.
- The advertising industry prefers verified identities for better targeting.
- Social media companies gain more reliable data and engagement.
- Online shopping companies can reduce fraud and increase tracking.
- Many SaaS companies would also welcome stronger identity verification.
In short, anonymity is not very profitable, and governments often favor identification because it increases oversight and control.
Of course, this leads to political debate. Some point out that voting often does not require ID, while accessing online services does. The usual argument is that voting is a constitutional right. However, one could argue that access to the internet has become a fundamental part of modern life as well. It may not be explicitly written into the Constitution, but in practice it functions as an essential right in today’s society.
You are missing the age/ID verification tech companies, who profit from violating privacy here. They have a strong incentive to try and convince/trick governments into legally requiring their services.
Realizing that much of the internet is totally toxic to children now and should have a means of keeping them out is distinct from agreeing to upload ID to everything.
A better implementation would be to have a device/login level parental control setting that passed age restriction signals via browsers and App Stores. This is both a simpler design and privacy friendly.
I like this take. Ultimately the only people responsible for what kids consume are the parents. It’s on them to control their kids’ internet access, the government has no place in it. If you want to punish someone for a child being exposed to inappropriate content, punish the negligence of the parents.
At least here in US: Google/Apple device controls allow app to request whether user meets age requirements. Not the actual age, just that the age is within the acceptable range. If so, let through, if not, can't proceed through door.
I know I am oversimplifying.
But I like this approach vs. uploading an ID to TikTok. Lesser of many evils?
It doesn't sound simple. Now there needs to be some kind of pipeline that can route a new kind of information from the OS (perhaps from a physical device) to the process, through the network, to the remote process. Every part of the system needs to be updated in order to support this new functionality.
I think it's quite embarrassing that the WWW exists since more than 3 decades and still there's no mechanism for privacy friendly approval for adults apart from sending over the whole ID. Of course this is a huge failure of governments but probably also of W3C which rather suggests the 100,000th JavaScript API. Especially in times of ubiquitous SSO, passkeys etc. The even bigger problem is that the average person needs accounts at dozens if not hundreds of services for "normal" Internet usage.
That being said, this is a 1 bit information, adult in current legislation yes/no.
> and still there's no mechanism for privacy friendly approval for adults apart from sending over the whole ID. Of course this is a huge failure of governments but probably also of W3C
I consider it a huge success of the Internet architects that we were able to create a protocol and online culture resilient for over 3 decades to this legacy meatspace nonsense.
> That being said, this is a 1 bit information, adult in current legislation yes/no.
If that's all it would take to satisfy legislatures forever, and the implementation was left up to the browser (`return 1`) I'd be all for it. Unfortunately the political interests here want way more than that.
SSO and passkeys don't solve adult verification. I don't see how this problem is embarrassing for the www - it's a hard problem in a socially permissible way (eg privacy) that can successfully span cultures and governments. If you feel otherwise, then solutions welcome!
If you dig into the CEO of the The Age Verification Providers Association (AVPA), he has spent years going out of his way to spam pro-age verification propaganda across random sites like Techdirt and Michael Geist's blog.
Its not unreasonable to assume that he would seek to automate his bullshit.
There are better and there are really really bad ways to do ID checks. In a world that is increasingly overwhelmed by bots I don't see how we can avoid proof-of-humanity and proof-of-adulthood checks in a lot of contexts.
So we should probably get ahead of this debate and push for good ways to do part-of-identity-checks. Because I don't see any good way to avoid them.
We could potentially do ID checks that only show exactly what the receiver needs to know and nothing else.
> We could potentially do ID checks that only show exactly what the receiver needs to know and nothing else.
A stronger statement: we know how to build zero-knowledge proofs over government-issued identification, cf. https://zkpassport.id/
The services that use these proofs then need to implement that only one device can be logged in with a given identity at a time, plus some basic rate limiting on logins, and the problem is solved.
When ID checks are rolled out there is immediate outrage. Discord announced ID checks for some features a couple weeks ago and it has been a non-stop topic here.
From what I’ve seen, most of the pro-ID commenters are coming from positions where they assume ID checks will only apply to other people, not them. They want services they don’t use like TikTok and Facebook to become strict, but they have their own definitions of social media that exclude platforms they use like Discord and Hacker News. When the ID checks arrive and impact them they’re outraged.
I think that not doing partial-identity checks invite bot noise into conversations. We could have id checks that only check exactly what needs to be checked. Are you human? Are you an adult? And then nothing else is known.
We have a Scylla vs Charybdis situation, where lack of ID leads to an internet of bots, while on the other end we get a dystopia where everything anyone has ever said about any topic is available to a not-so-liberal government. Back in the day, it was very clear that the second problem was far worse than the first. I still think it is, but I sure see arguments for how improved tooling, and more value in manipulating sentiment, makes the first one quite a bit worse than it was in, say, 1998.
I think a lot of the younger generation supports it, actually. They didn't really grow up with a culture of internet anonymity and some degree of privacy.
The younger generation is growing up where the internet is a giant dumpster fire of enshitification that a tanker full of gasoline just got poured on in the form of AI chatbots. With agents becoming even easier the equivalent of script kiddies are going to make it so much worse.
Privacy with respect to the government was one of the final pillars, but when everything placed on the internet is absorbed by the alphabets of government agencies, and the current admin does searches of it as their leisure they understand nothing is anonymous anymore.
It's funny that this is what the younger generation is going to think Millennials and older are completely stupid for still supporting. The current structure only benefits corporations and bots.
A lot of people are unhappy with the state of the Internet and the safety of people of all ages on it. I believe we should be focusing on building a way to authenticate as a human of a nation without providing any more information, and try to raise the bar for astroturfing to be identity theft.
There's absolutely some astroturfing happening, but I wouldn't discount that there is some organic support as well. Journalists have been pushing total de-anonymization of the internet for a while now, and there are plenty of people susceptible to listening to them.
It does feel like a shift, and sometimes coordinated signaling. This post was at the top of this thread not long ago. Now it's all pro-age verification posts.
It's inauthentic at best. The four horsemen of the infocaplypse are drugs, pedos, terrorists, and money laundering - they trot out the same old tired "protect the children!" arguments every year, and every year it's never, ever about protecting children, it's about increasing control of speech and stamping out politics, ideology, and culture they disapprove of. For a recent example, check out the UK's once thriving small forum culture, the innumerable hobby websites, collections of esoteric trivia, small sites that simply could not bear the onerous requirements imposed by the tinpot tyrants and bureaucrats and the OSA.
It's never fucking safety, or protecting children, or preventing fraud, or preventing terrorism, or preventing drugs or money laundering or gang activities. It's always, 100% of the time, inevitably, without exception, a tool used by petty bureaucrats and power hungry politicians to exert power and control over the citizens they are supposed to represent.
They might use it on a couple of token examples for propaganda purposes, but if you look throughout the world where laws like this are implemented, authoritarian countries and western "democracies" alike, these laws are used to control locals. It's almost refreshingly straightforward and honest when a country just does the authoritarian things, instead of doing all the weaselly mental gymnastics to justify their power grabs.
People who support this are ignorant or ideologically aligned with authoritarianism. There's no middle ground; anonymity and privacy are liberty and freedom. If you can't have the former you won't have the latter.
> Has the vibe really shifted so much among tech-literate people?
Actually, yes, it seems to have shifted quite a bit. As far as I can tell, it seems correlated with the amount of mis/disinformation on the web, and acceptance of more fringe views, that seems to make one group more vocal about wanting to ensure only "real people" share what they think on the internet, and a sub-section of that group wanting to enforce this "real name" policy too.
It in itself used to be fringe, but really been catching on in mainstream circles, where people tend to ask themselves "But I don't have anything to hide, and I already use my real name, why cannot everyone do so too?"
I don't support ID for internet use, only for adult content specifically. There's things on Discord that would shock you to your core if you saw some of it, I don't think children should be blindly exposed to any of it. Specifically porn. Tumblr almost got kicked out of the app store over porn, they went the route of banning it and killing what to me felt like a dying social media platform as things stood.
Do you think strip clubs and bars should stop IDing people at the door? I don't. Why should porn sites be any different?
The difference is that at the strip club, you show your ID to the bouncer, who makes sure its valid and that the photo matches your face, and then forgets all about it. Online, that data is stored forever.
The principle of online ID checks is completely sound; the implementation is not.
Until sex, sexuality, and sexual fantasies no longer have any potential stigma associated with them, any sort of verification to view mature content is unacceptable.
That sort information can permanently destroy peoples lives.
The average tech literate person keep seeing their data breached over and over again. Not because THEY did anything wrong, but because these Corpos can't help themselves. No matter how well the tech literate person secures their privacy it has become clear that some Corpo will eventually release everything in an "accident" that causes their efforts to become meaningless.
After a while it's only human for fatigue to build up. You can't stop your information from getting out there. And once it's out there it's out there forever.
Meanwhile every Corpo out there in tech is deliberately creating ways to track you and extract your personal information. Taking steps to secure your information ironically just makes you stand out more and narrows the pool you're in to make it easier to find you and your information. And again you're always just one "bug" from having it all be for nothing.
I still take some steps to secure my privacy, I'm not out there shouting my social security information or real name. But that's habit. I no longer believe that privacy exists.
To the extent we ever had it in the past was simply the insurmountable restrictions on tracking and pooling the information into some kind of organization and easy lookup. Now that it is easier and easier to build profiles on mass numbers of people and to organize those and rank them the illusion is gone. Privacy is dead. Murdered.
People are saying privacy is dead for decades, yet privacy continually declines more and more. And there's still quite a ways it can go from here. The defeatist attitude only helps further erosion.
> Has the vibe really shifted so much among tech-literate people?
HN has largely shifted away from tech literacy and towards business literacy in recent years.
Needless to say that an internet where every user's real identity is easily verifiable at all times is very beneficial for most businesses, so it's natural to see that stance here.
You talk about tech literacy, but then conflate age verification with knowing someone's identity. You should see the work people are doing to perform age verification in a way that preserves privacy, for example the EU and Denmark.
I mean there has always been some part of the tech literate people that were like that, they were just less likely to post about it on forums. Heck after the eternal September it wasn't uncommon for 'jokes' about requiring a license to use the internet.
Its weird how all these 1,000 IQ innovators suddenly can't figure it out.
I dont think they want to figure it out. They think the internet should be stagnant unchanging and eternal as it currently exists because it makes the most money. If you disagree you're either a normie, bot, or need to parent harder or something. There is nothing you can do don't dare try to change it.
Beware the vocal minority. Internet comment sections only tell you the sentiments of people who make comments.
HN comments sentiment seems to shift over the age of the thread and time of day.
My suspicion is that the initial comments are from people in the immediate social circle of the poster. They share IRC or Slack or Discord or some other community which is likely to be engaged and have already formed strong opinions. Then if the story gains traction and reaches the front page a more diverse and thoughtful group weighs in. Finally the story moves to EU or US and gets a whole new fresh take.
I’m not surprised that people who support something are the ones most tuned in to the discussion because for anyone opposed they also have their own unrelated thing they care about. So the supporters will be first since they’re the originators.
Many of the things you mention are also tools that many people use in a professional context which mostly doesn't work if you try to be anonymous. Yes, some people choose to be pseudonymous but that mostly doesn't work if your real-life and virtual identities intersect, such as attending conferences or company policies that things you write for company publications be under your real name.
I highly doubt the sentiment is from real humans. If anything, it proves that a web-of-trust-based-attestation-of-humanity is the real protection the internet needs.
The vibe has shifted quite a bit among the general populace, not just in tech.
The short version is that voters want government to bring tech to heel.
From what I see, people are tired of tech, social media, and enshittified apps. AI hype, talk of the singularity, and fears about job loss have pushed things well past grim.
Recent social media bans indicate how far voter tolerance for control and regulation has shifted.
This is problematic because government is also looking for reasons to do so. Partly because big tech is simply dominant, and partly because governments are trending toward authoritarianism.
The solution would have been research that helped create targeted and effective policy. Unfortunately, tech (especially social media) is naturally hostile to research that may paint its work as unhealthy or harmful.
Tech firms are burned by exposés, user apathy, and a desire to keep getting paid.
The lack of open research and access to data blocks the creation of knowledge and empirical evidence, which are the cornerstones of nuanced, narrowly tailored policy.
The only things left on the table are blunt instruments, such as age verification.
> Has the vibe really shifted so much among tech-literate people?
Yes.
Or more honestly, there was always an undercurrent of paternalistic thought and tech regulation from the Columbine Massacre days [0] to today.
Also for those of us who are younger (below 35) we grew up in an era where anonymized cyberbullying was normalized [1] and amongst whom support for regulating social media and the internet is stronger [2].
The reality is, younger Americans on both sides of the aisle now support a more expansive government, but for their party.
There is a second order impact of course, but most Americans (younger and older) don't realize that, and frankly, using the same kind of rhetoric from the Assange/Wikileaks, SOPA, and the GPG days just doesn't resonate and is out of touch.
Gen X Techno-libertarianism isn't counterculture anymore, it's the status quo. And the modern "tech-literate" uses GitHub, LinkedIn, Venmo, Discord, TikTok, Netflix, and other services that are already attached to their identity.
It’s bots pushing another false narrative. You’ll notice this in anything around politics or intelligence the past 10+ years, with big booms around 2016 and 2024 “for some reason”
No. There are significant numbers of real people who genuinely support this type of thing. Dismissing it as "bots" or a "false narrative" leads to complacency that allows this stuff to pass unchallenged.
I used to be so against this but after the never ending cat and mouse game with my kids (especially my son) I don't think the tech crowd really appreciates how frustrating it is and how many different screens there are.
Tons of data also showing higher suicide rates, depression rates, eating disorders etc. so it's not as if there is no good side to this.
If they are so intent on disobeying what makes you they won't just use a VPN or ask someone older to login for them (or any other workaround, depending on the technology)?
I was a child of the 90's, where the numbers were higher, where we had peak PMRC.
> depression rates
Have these changed? Or have we changed the criteria for what qualifies as "depression"? We keep changing how we collect data, and then dont renormalized when that collection skews the stats. This is another case of it, honestly.
> eating disorders
Any sort of accurate data collection here is a recent development:
As a tech-literate person, I'm not 100% against the concept of ID if only because I think people will be more reasonable if they weren't anonymous.
This conflicts with my concerns about government crackdowns and the importance of anonymity when discussing topics that cover people who have a monopoly on violence and a tendency to use it.
So it's not entirely a black/white discussion to me.
Both Google and Facebook have enforced real identity and its not improved the state of peoples comments at all. I don't think anonymity particular changes what many people are willing to say or how they say it, people are just the creature you see and anonymity simple protects them it doesn't change their behaviour all that much.
I think opt-in ID is great. Services like Discord can require ID because they are private services*. Furthermore, I think that in the future, a majority of people will stay on services with some form of verification, because the anonymous internet is noisy and scary.
The underlying internet should remain anonymous. People should remain able to communicate anonymously with consenting parties, send private DMs and create private group chats, and create their own service with their own form of identity verification.
* All big services are unlikely to require ID without laws, because any that does not will get refugees, or if all big services collaborate, a new service will get all refugees.
The problem is this is only true for values of "reasonable" that are "unlikely to be viewed in a negative light by my government, job, or family; either now or at any time in the future". The chilling effect is insane. There was a time in living memory when saying "women should be able to vote" was not a popular thing.
I mean, this is _literally the only thing needed_ for the Trump admin to tie real names to people criticizing $whatever. Does anyone want that? Replace "Trump" with "Biden", "AOC", "Newsom", etc. if they're the ones you disagree with.
When you’re young, the overwhelming and irrepressible desire to overcome society's proscriptions to satisfy your intellectual and sexual curiosity is natural and understandable. The open Internet made that easier than ever, and I enjoyed that freedom when I was younger—though I can’t say it was totally harmless.
When you’re older and have children—especially preteens and teenagers—you want those barriers up, because you’ve seen just how fucked up some children can get after overexposure to unhealthy materials and people who want to exploit or harm them.
It’s a matter of perspective and experience. As adults age, their natural curiosity evolves into a desire to protect their children from harm.
The only thing this is going to achieve is to bar unverified users form the vaguely reputable and mainstream places into the small, completely unregulated spaces, sites and networks.
I presume you prefer hard requirement of IDs.
I'm saying this will make kids go to i2p, tor, to the obscure fora in countries not giving a f* about western laws.
As a parent to the teens and teens, THIS makes me concerned. The best vpns are very hard to detect (I know, I try it myself).
> When you’re older and have children—especially preteens and teenagers—you want those barriers up, because you’ve seen just how fucked up some children can get after overexposure to unhealthy materials.
You mean that you shirk your responsibility to teach your child how to protect themself on the Internet, and instead trust the faceless corp to limit their access at the cost of everyone's privacy? How does this make sense...
Some can be 50 and still be clueless who to trust and what to do.
Every kind of discrimination merely shifts the burden.
And thinking of the children as an excuse for draconian law is itself child abuse. It's using children as a shield to take cover.
EDIT: I'd like to add that if age verification becomes a thing, we should also have an online drug test, insurance verification, financial wellbeing tests, mental health checks and a badge of dishonor for anyone who fails to comply.
Not really. If you take their rhetoric at face value ("Think of the children") then sure, it undermines everyone's data protection.
I will go as far as to assume that no one on HN believes this is done for the children. It's been done to censor people and ID the majority of normies online. And when you think of this, the undermining of everyone's data protection is note an undesired side effect, it's the goal.
Is this article AI-generated?
https://www.pangram.com/history/f421130b-eefc-4f8c-b380-da0a... . I find it worrying that authors don't disclose the amount of AI assistance used in drafting and editing upfront. It sets a worrying precedent where everything is AI unless proven otherwise.
Age Verification is very hard to do without exposing personal information (ask me how I know). I feel it should be solved by a platform company - someone like Apple (assuming we trust apple with our personal information but seems like we already do) - and the platform (ios) should be able to simply provide a boolean response to "is this person over 18" without giving away all the personal information behind the age verification.
Now the issue of which properties can "ask to verify your age" and "apple now knows what you're looking at" is still an unsolved problem, but maybe that solution can be delivered by something like a one time offline token etc.
But again, this is a very hard problem to solve and I would personally like to not have companies verify age etc.
Isn't it a simpler solution to create some protocol for a browser or device announce an age restricted user is present and then have parents lock down devices as they see fit?
Aside from the privacy concerns, all this age verification tech seems incredibly complicated and expensive.
I think this solution exists (e.g. android parental lock, but also ISP routers). But parents and industry have failed to do so on a greater scale. So legislation is going for a more affirmative action that doesn't require parental consent or collaboration.
A service provider of adult content now cannot serve a child, regardless of the involvement or lack thereof of a parent.
I'm certain there is a way to verify age without compromise of privacy or identity. I'm sure it's possible to build some oAuth like flow that could allow sites to verify both human-ness and age. The systems and corporations that gate that MUST (in the RFC sense) be separate from the systems and corporations that want the verification.
Do we need laws to make this happen? What methods can be used to aid adoption? Do site operators really want to know the humanness and ages or are those just masks on adding more surveillance?
I really do think age verification should be at the device level, not per app or website. Parents can and should lock down their children's devices. I know device manufacturers don't want to take on that legal burden, but the tech is _right there!_
The thing that needs to be age banned, or really just banned, is algorithmic feeds with infinite scroll. Kids (and adults) need to just interact with their friends, and block all the bait.
What's always got me about this is when I was in school I had it absolutely drilled into me that I should never expose personal information online to anyone, I completely saw the logic in that and so heavily limit the personal data I give out. Now we're just expecting people to completely go against that and give away the most personal details possible to companies who cannot prove what they are or are not doing with it just because governments have decided that's best now?
Starts off with a flawed assumption, playing into the hands of people who want surveillance.
"The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely."
If you start by legislating that you can't collect personal data or ID, then you are forced to do your age verification through other means. And legislate the government can't see what websites a user is visiting if you can to stop overreach. End result is a workable solution, zero knowledge proof or similar where government (the source of your ID documents) signs a token brokered by a proxy.
But when you start arguments from the position of 'no way to do this without violating privacy', the end result will be to violate privacy, because it seems an awful lot of people are demanding age verification and will sacrifice if they believe it is necessary.
The elephant in the room is 'unverified' users will overwhelmingly be underage kids, and that absence will be tracked across the internet. This whole thing inadvertently exposes who are the kids vs the adults programmatically.
Second, if all it takes to get into underage spaces is not being verified, predators *will* notice and exploit this hole.
Even the absence of information is information.
> The Roblox games site, which recently launched a new age-estimate system, is already suffering from users selling child-aged accounts to adult predators seeking entry to age-restricted areas, Wired reports.
> Second, if all it takes to get into underage spaces is not being verified, predators *will* notice and exploit this hole.
Well the default state is "assume underage". So the default state is be in same location as children. There's nothing for predators to exploit, they get access by default.
Which once people realize that, it all becomes really silly. The only way it would really work is by verifying that people are children, so only children can be in the gated location. But then you need to do mass surveillance on children and I think even the average person realizes this just makes that a great place for predators and the damage caused by a leak is far greater to children. Not to mention the impractical nature of it as children are less likely to able to verify themselves and honestly, you expect kids to jump through extra hoops?
Anyone that believes these systems will keep predators away from children haven't thought about even the most basic aspects of how these systems work. They cannot do what they promise
kids will just use their parents' credentials. it's not an edge case, it will actually just be a default behavior, I promise. platforms need to be safe by default, but that's the problem - safe by default doesn't play nice with engagement. platforms need engagement for profit. chicken and egg.
Well there are technical solutions for this: blind signatures.
I could generate my own key, have the government blind sign it upon verifying my identity, and then use my key to prove I'm an adult citizen, without anyone (even the signing government) know which key is mine.
Any veryfying entity just need to know the government public key and check it signed my key.
The ID check laws are about matching an identity to a user account.
If the identity check was blind it wouldn't actually be an identity check. It would be "this person has access to an adult identity".
If there is truly no logging or centralization, there is no limit on how many times a single ID could be used.
So all it takes is one of those adult blind signatures to be leaked online and all the kids use it to verify their accounts. It's a blind process, so there's no way to see if it's happening.
Even if there was a block list, you would get older siblings doing it for all of their younger siblings' friends because there is no consequence. Or kids stealing their parents' signature and using it for all of their friends.
The internet isn’t the same as it was when we were growing up, unfortunately. I miss the days of cruising DynamicHTML while playing on GameSpy but… yeah. It became an absolute clusterfuck and I’m not surprised they now want to enforce age restrictions.
Maybe TBL is right and we need a new internet? I don’t have the answer here, but this one is too commercialized and these companies are very hawkish.
The powers that be are 'correcting' the mistake of the anonymous internet where anyone can call a politician fat and the police can't arrest them for it.
In my experience the people who want "privacy preserving age verification" are the same people who want "encryption backdoors but only for the good guys." Shockingly the technically minded among them do seem to recognize the impossibility of the latter, without applying the same chain of thought to the former.
They are fundementally different problems. It is already the governments job to maintain a record of their citizens and basic demographic information like age.
Private actors are already offering verification as a paid service. They are accumulating vast troves of private date to offer the service.
This is my problem with the Discord situation too:
Big tech don't have wait for an outright government ban when they can just say that we are a teen-only site by default and everyone have to verify if they are over 18 or not. This age verification will affect everyone no matter what.
people really believe this coordinated push across jurisdictions is about kids and verifying their age? this excuse to try to end pseudonimity on the web is as old as the mainstream internet itself
to a lot of people it never sat well that people could just go online and say whatever they want, and communicate with each other unsupervised at large scale, and be effectively untargetable while doing so - that model of the internet was only allowed because it happened under the radar and those uncomfortable with it have been fighting it since they got the memo
the companies pushing hardest for age verification are the same ones whose business model depends on knowing exactly who you are. the child safety framing is convenient cover for a data collection problem they were already trying to solve.
I‘m not too knowledgeable about this, but couldn’t you just provide a government issued key to every citizen and give a service provider that key and it‘s only valid if you’re above a certain age?
I don't get the alcohol analogy as in most places it's 100% legal for minors to consume alcohol in the home with parental permission in the USA. In public it's a different story.
"Verifying age undermines everyone's data protection"
That's the whole point, right? A pretense to remove any remaining anonymity from communications?
Governments are endlessly infested with the worst people. They look back at historical attempts at totalitarianism and think to themselves, "Let's facilitate something like that, but worse".
good breakdown of the tradeoffs but it kinda stops at critique. explains why age verification becomes invasive but doesnt really propose what a workable alternative looks like. feels like we need concrete models or architectures not just pointing out the trap.
A lot of talk and no solutions. Exactly the reason we are where we are.
Liquor stores, bars, strip clubs, adult bookstores, or similar businesses don't let kids in. Movie theatres don't let a 10 year old in to an R-rated movie. The tech industry ignored their social responsibility to keep kids away from adult and age-inappropriate content. Now, they are facing legal requirements to do so. Tough for them, but they could have been more proactive.
Everything is a trade off in the world. I think that people who are anti-id ignore this but for me personally it’s harder and harder to accept the trade offs of an internet without id. AI has only accelerated this, I don’t want to live in a world where the average person unknowingly interacts with bots more than other individuals and where black market actors can sway public opinion with armies of bots.
I think most people are aligned here, and that an internet without identification is inevitable whether we like it or not.
Identification fixes nothing here, you log with your account, plug in the AI.
The problems with social media have nothing to do with ID and everything to do with godawful incentives, the argument seems to be that it's a large price to pay but that it's worth it. Worth it for what? The end result is absolutely terrible either way
1) ID checks will not close the trade off. Real IDs are easily available on the market. Thus criminals will used them no problem. It's the law-abiding privacy-minded people (like me) who would be hurt the most.
2) Your point is valid. I too want to know whether I am engaging with a bot or a person. This is impossible now and it will be impossible once ID check becomes ubiquitous.
3) I will be happy to see (or not) a blue checkmark by the profile name. Just like in Twitter. That's enough.
100% correct. At this point the harms to children from social media use are very well documented.
Like everything else in society, there are tradeoffs here, I'm much more concerned with the damage done to children's developing brains than I am to violations of data privacy, so I'm okay with age verification, however draconian it may be.
> At this point the harms to children from social media use are very well documented
Our middle child (aged 12) has an Android phone, but it has Family Link on it.
Nominally he gets 60 mins of phone time per day, but he rarely even comes close to that, according to Family Link he used it for a total of 17 minutes yesterday. One comes to the conclusion that with no social media apps, the phone just isn't that attractive.
He seems to spend most of his spare time reading or playing sports...
We need to destroy privacy and anonymity online for the noble goal of the government banning teenagers from looking at Twitter and Instagram?
If it's a concern, parents can prevent or limit their children's use. If all this were being done to prevent consistent successful terrorist attacks in the US with tens of thousands of annual casualties, I'd say okay maybe there is an unavoidable trade-off that must be made here, but this is so absurd.
>I don’t want to live in a world where the average person unknowingly interacts with bots more than other individuals and where black market actors can sway public opinion with armies of bots.
That is not the argument for identification on many places on the internet. It's not even the argument that the gov reps pushing it typically make.
And why would it be. The companies that go along with all this don't want to get rid of all bots and public opinion campaigns. They make money off of many of those.
You're not thinking more than one step ahead. If you let a third party define who "has ID", "is human", etc. you give that third party control over you. You already gave control of your attention away to the sites who host the UGC, now you also give away control of your sense of reality.
At any point they can tell a real human what they can and can't say, and if they go against their masters, their "real human" status is revoked, because you trust the platform and not the person.
If we want to go full conspiritard, we could accuse those of wanting to control speech to be the financial backers of those flooding social media with AI slop: https://www.youtube.com/watch?v=-gGLvg0n-uY -- this fictional video thematically marries Metal Gear Solid 2's plot with current events: "perfect AI speech, audio and video synthesis will drown out reality [...] That is when we will present our solution: mandatory digital identity verification for all humans at all times"
I am though. In the world I live in I already have to give power over myself to corporations and a government, I don’t buy this as an argument for continuing to let internet companies skirt existing laws.
Why don't we have PKI built in to our birth certificates and drivers licenses? Why hasn't a group of engineers and experts formed a consortium to try and solve this problem in the least draconian and most privacy friendly way possible?
ID verification doesn't protect against that. Why not? Because there are a lot of people that will trade their ID for a small amount of money, or log someone/something in. IDs are for sale, like everyone who was ever a high school student knows for "some" reason.
Plus what you're asking would require international id verification for everyone, which would first mostly make those IDs a lot cheaper. But there's a second negative effect. The organizations issuing those IDs, governments, are the ones making the bot armies. Just try to discuss anything about Russia, or how bad some specific decision of the Chinese CCP is. Or, if you're so inclined: think about how having this in the US would mean Trump would be authorizing bot armies.
This exists within China, by the way, and I guarantee you: it did not result in honest online discussion about goods, services or politics. Anonymity is required.
I am so surprised by the comments on this thread. I was not expecting to see so many people on Hacker News in favor of this. As is typically the case with things like this, the reasoning stems from agreeing with the goal of age verification, with little regard to whether age verification could ever actually work. It reminds me in some sense to the situation with encryption where politicians want encryption that blocks "the bad guys" while still allowing "the good guys" to sneak in if necessary. Sure, that sounds cool, it's not possible though. I suppose DRM is a better analogue here, an increasingly convoluted system that slowly takes over your entire machine just so it can pretend that you can't view video while you're viewing it.
To be clear, tackling the issue of child access to the internet is a valuable goal. Unfortunately, "well what if there was a magic amulet that held the truth of the user's age and we could talk to it" is not a worthwhile path to explore. Just off the top of my head:
1. In an age of data leaks, identity theft, and phishing, we are training users to constantly present their ID, and critically for things as low stakes as facebook. It would be one thing if we were training people to show their ID JUST for filing taxes online or something (still not great, but at least conveys the sensitivity of the information they are releasing), but no, we are saying that the "correct future" is handing this information out for Farmville (and we can expect its requirement to expand over time of course). It doesn't matter if it happens at the OS level or the web page level -- they are identical as far as phishing is concerned. You spoof the UI that the OS would bring up to scan your face or ID or whatever, and everyone is trained to just grant the information, just like we're all used to just hitting "OK" and don't bother reading dialogs anymore.
2. This is a mess for the ~1 billion people on earth that don't have a government ID. This is a huge setback to populations we should be trying to get online. Now all of a sudden your usage of the internet is dependent on your country having an advanced enough system of government ID? Seems like a great way for tech companies to gain leverage over smaller third world companies by controlling their access to the internet to implementing support for their government documents. Also seems like a great way to lock open source out of serious operating system development if it now requires relationships with all the countries in the world. If you think this is "just" a problem of getting IDs into everyone's hands, remember that it a common practice to take foreign worker's passports and IDs away from them in order to hold them effectively hostage. The internet was previously a powerful outlet for working around this, and would now instead assist this practice.
3. Short of implementing HDCP-style hardware attestation (which more or less locks in the current players indefinitely), this will be trivially circumvented by the parties you're attempting to help, much like DRM was.
Again, the issues that these systems are attempting to address are valid, I am not saying otherwise. These issues are also hard. The temptation to just have an oracle gate-checker is tempting, I know. But we've seen time and again that this just (at best) creates a lot of work and doesn't actually solve the problem. Look no further than cookie banners -- nothing has changed from a data collection perspective, it's just created a "cookie banner expert" industry and possibly made users more indifferent to data collection as a knee-jerk reaction to the UX decay banners have created on the internet as a whole. Let's not 10 years from now laugh about how any sufficiently motivated teenager can scan their parent's phone while they're asleep, or pay some deadbeat 18 year-old to use their ID, and bypass any verification system, while simulateneously furthering the stranglehold large corporations have over the internet.
Innovation doesn't need a big brother looking over your shoulder all the time. We have enough tracking tech on the web, no need to make it official as well. So yes, the "will" is missing, especially with the many misbehaviour states have shown towards privacy and surveillance.
A little legislative change and you can kiss your zero-proof goodbye if any infrastructure is established. This is about making intelligent decisions in your life. Your suggestion is far from innovative.
We will see real innovation in mechanisms to sideline age verification.
I want to sincerely ask whether you read my post, because your response is so unrelated I believe you might accidentally be responding to another post? If so, please ignore the rest, which is only intended in the case where you are actually responding to what I wrote.
Your system seems to address none of the issues I listed. For example, I argue that one difficulty is in the fact that these systems would be highly phishable -- a property that is present in your described "easy" solution. Your system trains users to become accustomed to being pestered by pop up windows that ask to see their ID and use their camera. Congrats, I can now trivially make a pop up a window that looks like this UI and use it to steal your info, as the user will just respond on auto-drive, as we have repeatedly shown both in user studies and in our own lived experiences. I also explained how a system like this would assist in the practice of trapping migrant workers by confiscating their government credentials [1]. This is a huge problem today in Asia, and one of the few outlets captive workers can use to escape this control is the internet -- a "loophole" your system would dutifully close for these corporations.
I am happy to have a discussion about this -- it's how we come up with new solutions! But that requires reading and responding to the concerns I brought up, not assuming that my issue is that I can't imagine implementing a glorified OAuth login flow.
Every age verification scheme is really an identity verification scheme, "age" is just the acceptable entry point. Once the infrastructure exists to verify you are 18, it can verify you are not on a watchlist, verify your creditworthiness, verify your political associations.
You are not building a parental filter. You are building rails.
"Protect the children" is the canonical playbook for every surveillance expansion since forever. The children get protected for six months. The infrastructure stays forever.
I wonder how much time we have before being asked to enter the government issued ID in a card reader so websites can read age and biometric data from the chip.
It’s the same as scanning for CSAM, or encryption-backdoors “to catch criminals”.
Of course we hate child abuse.
Of course we hate criminals.
Of course we hate social media addicting our kids.
But they’re just used as emotional framing for the true underlying desire: government surveillance.
(For the record: I am not into conspiracy theories; the EU has seen proposals for - imho technically impossible - “legally-breakable encryption” alone in 2020, 2022, and 2025; now we”ll also see repeated attempts at the “age verification” thing to force all adults to upload their IDs to ‘secure’ web portals)
It's amazing how much it's possible to foment arguments against something if you are very well funded and a regulation will cost your industry a lot of money.
Age verification is a good thing. Giving children unrestricted access to hardcore pornography is bad for them. Whatever arguments you want to make, fundamentally this is true.
Age verification is fundamentally harmful and is an attack on user privacy. Age verification is being heavily lobbied for by tech companies that are hoping to get rich off of violating your privacy.
Anonymous age verification is fundamentally impossible. It is especially a bad idea for adult content, as a person's perfectly legal sexual beliefs and fantasies can permanently destroy their lives if that information got out. Parental controls are the only ethical, secure, and privacy protecting way forward here.
Most of them probably don't even have kids of their own, they just hate social media for exposing children to conservative ideas and want to ban it to prevent that.
I think this should work like OpenID connect but with just a true/false.
PS = pr0n site
AV = age verification site (conforming to age-1 spec and certified)
PS: Send user to AV with generated token
AV: Browser arrives with POST data from PS with generated token
AV: AV specific flow to verify age - may capturing images/token in a database. May be instant or take days
AV: Confirms age, provides link back to original PS
PS: Requests AV/status response payload:
{
"age": 21,
"status": "final"
}
No other details need to be disclosed to PS.
I don't know if this is already the flow, but I suspect AV is sending name, address, etc... All stuff that isn't needed if AV is a certified vendor.
All of my kids devices are identified, at device level, as children's devices. They could've trivially exposed this as metadata to allow sites to enforce "no under 18" use. However, I'd disagree that my bigger concern for my kids isn't that they'd see a boob or a penis, but that they'd see an influencer who'd try to radicalize them to some extremist cause, and that's usually not considered 18+ content.
And either way, none of that requires de-anonymizing literally everyone on the internet. I'd be more than happy to see governments provide cryptographically secure digital ID and so that sites can self-select to start requiring this digital ID to make moderation easier.
The problem with that is that sites/apps will retain the identifier, either to use the Digital ID for login (not just one-time age verification), because they want to retain as much information as possible for later usage or sale, or because a government told them they have to retain it so all their social media activity can be easily linked to them.
I have been a kid and now a parent. It is impossible with the tools available to proof kids from the internet. If it's not a parent it's at a friends house, school devices, or a dedicated sense of curiosity.
Two things tech companies want to protect:
The perception of anonymity
Who gets to collect that information
I agree that a smaller loop of people should have that data but the loop is growing every day.
So if it ruins the perception anonymity for young naieve users so be it.
I'm not saying it's impossible to be somewhat anon, I'm just saying untrained users should understand the environment they're interacting with before they get hooked on useless products.
My main takeaway from this is that politicians seem to have given up on making "social media" less harmful by regulating it, and instead focus on gatekeeping access, with the added perk of supplying security services and ad tyrants with yet another data pump.
Imagine an OIDC type solution but for parents might work here.
Basically, kids can sign up for an account triggering a notification to parents. The parent either approves or rejects the sign in. Parents can revoke on demand. See kids login usage to various apps/services. Gets parental restrictions in the login flow without making it a PITA.
Imagine if regular TV with age warnings in the beginning of programs made you fax your ID to the channel headquarters before you could watch PG13+ or whatever movie. This is obviously a privacy nightmare and a suboptimal solution.
A good solution that respects privacy and helps reduce the exposure to harmful content at a young age is not very obvious though (but common sense and parental guidance seems to be the first step)
ZKP is integrated in Google Wallet and has been running in production for a few months. We (Google) released the ZKP library as open source last year (this is the library used in production).
Afterwards, folks from ISRG produced an independent implementation
https://github.com/abetterinternet/zk-cred-longfellow with our blessing and occasional help. I don't know if the authors would call it "production ready" yet, but it is at least code complete, fast enough, and interoperable with ours.
I feel like the ending undermines the whole piece. Throwing your hands up and going "we should do nothing" isn't really a solution. If a compromise exists I think it's adding age requests on device setup. There wouldn't be any verification but it could be used as a way to limit access to content globally. Content provides would just need a simple API to check if the age range fits and move right along.
This puts more onus on parents and guardians to ensure their child's devices are set up correctly. The system wouldn't be perfect and people using something like Gentoo would be able to work around it, but I think it helps address the concerns. A framework would need to be created for content providers to enforce their own rating system but I don't think it's an impossible task. It obviously wouldn't cover someone not rating content operating out of Romania, but should be part of the accepted risk on an open internet.
Personally I do agree with the "do nothing" stance, but I don't think it's going to hold up among the wider public. The die is cast and far too many average people are supporting moves like this. So the first defense should be to steer that conversation in a better way instead of stonewalling.
> Personally I do agree with the "do nothing" stance, but I don't think it's going to hold up among the wider public. The die is cast and far too many average people are supporting moves like this. So the first defense should be to steer that conversation in a better way instead of stonewalling.
I agree with this, and I find it frustrating how many people refuse to see this. It seems a lot of people would rather be "right" than compromise and keep the world closer to their stated values.
Big Tech refused to work together to implement a age flag that parents would setup on the children device, now we get each European and each USA state with their own special rules.
I can understand the need to restrict some stuff kids can see, like when I was a teen it me hours and hours to download one 2 minute porn clip from kazaa, but these days you could download a lifetime worth in one weekend. That can't be healthy.
That being said nothing about these laws is about protecting children; their primary purpose is to crack down on the next Just Stop Oil or Palestine Action so for that reason should be opposed.
> their primary purpose is to crack down on the next Just Stop Oil or Palestine Action so for that reason should be opposed.
Could you explain to me how a digital id standard involving a mechanism for zero knowledge proof of identity is supposed to help the government crack down on activists (skipping the fact you are using UK examples for a EU spec)?
Take into account the reality we live in where every communication platform already requires you to provide your phone number and you can't get a phone number without providing an ID the European Union please, not some disconnected threat model.
> ...but these days you could download a lifetime worth in one weekend.
Uh. I could check in the back of my parents' closet (hidden under some fabric) for at least a decade's worth of dirty magazines. It's true that that's less than a lifetime's worth of pictures and articles, but I'd say that that's effectively equivalent.
> That can't be healthy.
The only thing that's unhealthy is not being able to talk frankly and honestly about sex and sexuality with your peers, parents, and other important adults in your life. Well, that and never being told that sex leads to pregnancy, or how to recognize common STDs... but you're likely to get that "for free" if you're able to talk frankly and honestly about sex and sexuality.
I'm happy to see the IEEE talking about this, and to see the topic getting attention in the technical press. The problem is, I think (for the most part) that technical people already Get It. Who we need to convince are average, non-technical voters who don't know, don't understand, think it's a good thing, or would happily jump feet-first into a wood chipper if someone told them it would protect the children.
I also suspect that social media has damaging effects on kids, and they probably shouldn't have access to it, but not like this. I'd probably be quicker to support something like saying that individuals <18 aren't allowed to buy or possess a phone or tablet that has access to an app store or web browser, and only offers voice- and text-based communications channels. Ok, so now it all happens on a laptop? What's "a tablet?" Is a Chromebook a tablet? It's fucking impossible.
this is a broader parenting problem, the state doesn't need to do this
politicians are interested in it because they're begging for some way to censor the internet, which would actually be even worse for parenting because now it prevents children from ever learning to be responsible with these highly addictive platforms
Mandatory age check is not going to reduce the number of criminals online. Period.
We should focus on teaching parents how to educate their children properly, and teach children how to safely browse the internet and how to avoid common scams and pitfalls.
I played Roblox when I was a teenager and all the time my aunt told me to be careful of who I talked to online, as they could be a pedo. Even though there wasn't a constant monitoring from my parents or family, her words were repeated many times that I actually thought 5 times before sharing any kind of personal information online, back then.
Helping parents would be useful rather than telling them to "just do it".
The pain in trying to set up fortnite and minecraft online as parents is unsummounted, involving creating half a dozen accounts with different companies just to get some form of control. Far easier to just give them an adult account.
The process to create a child account should be seamless and no harder than creating an adult account.
Age verification is not more difficult than a payment system.
I mean, if I can pay on a website without the website to know my credit card number, I should be able to prove my age without the website to know anything about me either.
France has a ID verification system for all its service. You’d think they should be able to provide a hook that lets people prove they’re over the age limit to any third party without the third party knowing. It seems fairly basic.
There is a solution to this.
There are privacy issues on the internet, but I think this ain’t one.
It is a valid alternative avenue towards a legal implementation of "child safeguarding" IMO. Someone pays for the internet, that person is responsible for what minors do on their connection. If they have trouble doing that we can use normal societal mechanisms like idk social services, education, and government messaging.
This is the way it works with e.g. alcohol and cigarettes, most places. Famously kids can just get a beer from a random fridge and chug it, but someone 16/18/21+ will be responsible and everyone seems mostly fine with this.
This is the answer. If you provide internet access to someone, you're responsible for it. It's a generally established law from a Torrenting PoV, so isn't it equally applicable to downloading content unsuitable for children. Sure it'll destroy offering free wifi, but that always was tricky from a legal PoV around responsibilities.
Ideally the law would require websites (and apps) to provide some signed age requirement token to the client (plus possibly classification) instead of the reverse. Similarly OS and web clients should be required to provide locked down modes where the maxium age and/or classification could be selected. As a parent I would the be able to setup my child device however I wish without loss of privacy.
Is it bypassable by a sufficiently determined child? Yes, but so it is the current age verification nonsense.
I have a problem with an open internet and allowing open access to everything the internet can offer to young children.
It cannot be a friction-less experience. Allowing children to see gore and extreme porn at a young age is not healthy. And then we have all the "trading" platforms (gambling).
Even though my brothers were able to get many hard drugs when I was young, around 1977, there was a lot of friction. Finding a dealer, trusting them, etc. Some bars would not card us but even then there was risk and sometimes they got caught. In NY we could buy cigarettes, no friction, and the one drug I took when I was young, addicted to them at 16, finally quitting for good at 20. I could have used some friction there.
So how do we create friction? Maybe hold the parents liable? They are doing this with guns right now, big trial is just finishing and it looks like a father who gave his kid an ak47 at 13 is about to go to jail.
I would like to see a state ID program when the ID is just verified by the State ID system. This way nothing needs to be sent to any private party. Sites like Discord could just get a OK signal from the state system. They could use facial recognition on the phone that would match it with the ID.
Something needs to be done however. I disagree that the internet needs to be open to all at any age. You do not need an ID to walk into a library, but you need one to get into a strip club. I do not see why that should not be the same on the internet.
> The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely.
Well isn't this premise false from the get go? Many countries (not the US sure, but others) have digitised ID. Services can request info from the ID provider; in this case social media websites would simply request a bool isOver16, literally one bit of information, to grant access. No other information needs to be leaked, and no need for idiotic setups like sending photos of your passport to god knows what website (or god knows what external vendor that website uses for ID verification).
Seems silly to worry about this when social media itself is predicated on collecting gigabytes of data about you daily.
Again, this is not about half assed solutions that force you to send photos of your passport to websites. That's a terrible idea for the reasons discussed in the article. But it's obviously false that this is the only way.
Corporate interests don’t care about data privacy or security they care about liability and compliance which are not the same thing.
Major banks and government institutions can’t even be bothered to implement the NIST password guidelines. If they got their gdpr soc2 fedramp whatever it’s green lights and the rest is insurance.
The point is to undermine data protection; this debate is useless. It's a question about power and control, not a technical one. The people lobbying for this don't care about children, and neither are they getting big support from a constituency clamoring for this. This is an intelligence initiative, and a donor initiative from people who are in a position to control the platform (all computing and communications) after it is locked down.
It's not even worth talking about online. There's too much inorganic support for the objectives of nation-states and the corporations that own them.
Legislation has been advanced in Colorado demanding that all OSes verify the user's age. It will fail, but it will be repeated 100 times, in different places, smuggled attached to different legislation, the process and PR strategies refined and experimented with, versions of it passed in Australia, South Korea, maybe the UK and Europe, and eventually passed here. That means that "general purpose" computing will be eventually be lost to locked bootloaders.
And it will be an entirely engineered and conscious process by people who have names. And we will babble about it endlessly online, pretending that we have some control over it, pretending that this is a technical discussion or a moral discussion, on platforms that they control, that they allow us to babble on as an escape valve. Then, one day the switch will flip, and advocacy of open bootloaders, or trading in computers that can install unattested OSes, will be treated as organized crime.
All I can beg you to do is imagine how ashamed you'll be in the future when you're lying about having supported this now, or complaining that you shouldn't have "trusted them to do it the right way." Don't let dumb fairytales about Russians, Chinese, Cambridge Analytics and pedophile pornography epidemics have you fighting for your own domination. Maybe you'll be the piece of straw that slows things down just enough that current Western oligarchies collapse before they can finish. Maybe we'll get lucky.
Polls and ballots show that none of this stuff has majority organic support. But polls can be manipulated, and good polls have to be publicized for people to know they're not alone, and not afraid they're misunderstanding something. If both candidates on the ballot are subverted, the question never ends up on the ballot.
The article itself says nothing that hasn't been said before, and stays firmly under the premise that access to content online by under-18s is suddenly one of the most critical problems of our age, rather than a sad annoyance. What is gained by having this dumb discussion again?
It's crazy to me that we want to force age verification on every service across the Internet before we ban phones in school. I could understand being in favor of both, or neither, but implementing the policy that impacts everybody's privacy before the one that specifically applies within government-run institutions is just so disappointingly backwards it's tempting to consider conspiracy-like explanations.
The advantage, I think, of age verification by private companies over cellphone bans in public schools is that cellphone bans appear as a line-item on the government balance sheet, whereas the costs of age verification are diffuse and difficult to calculate. It's actually quite common for governments to prefer imposing costs in ways that make it easier for the legislators to throw up their hands and whistle innocently about why everything just got more expensive and difficult.
And the argument over age verification for merely viewing websites, which is technically difficult and invasive, muddles the waters over the question of age verification for social media profiles, where underage users are more likely to get caught and banned by simple observation. The latter system has already existed for decades -- I remember kids getting banned for admitting they were under 13 on videogame forums in the '00s all the time. It seems like technology has caused people to believe that the law has to be perfectly enforceable in order to be any good, but that isn't historically how the law has worked -- it is possible for most crimes to go unsolved and yet most criminals get caught. If we are going to preserve individual privacy and due process, we need to be willing to design imperfect systems.
As a parent, I'm happy that social bans are finally a thing.
But, I don't get the approach. It's not like social media starts being a positive in our life at 20. The way these companies do social media is harmful to mental health at every age. This is solving the wrong problem.
The solution is to take away their levers to make the system so addictive. A nice space to keep in touch with your friends. Nothing wrong with that.
I'm going to state that at one point I was one of the young people this kind of legislation is meaning to protect. I was exposed to pornography at too young an age and it became my only coping mechanism to the point where as an adult it cost me multiple jobs and at one point my love life.
I don't think this legislation would have helped me. I found the material I did outside of social media and Facebook was not yet ubiquitous. I did not have a smartphone at the time, only a PC. I stayed off social media entirely in college. Even with nobody at all in my social sphere, it was still addicting. There are too many sites out there that won't comply and I was too technically savvy to not attempt to bypass any guardrails.
The issue in my case was not one of "watching this material hurt me" in and of itself. It was having nobody to talk to about the issues causing my addiction. My parents were conservative and narcissistic and did not respect my privacy so I never talked about my addiction to them. They already punished me severely for mundane things and I did not want to be willingly subjected to more. To this day they don't realize what happened to me. The unending mental abuse caused me to turn back to pornography over and over. And I carried a level of shame and disgust so I never felt comfortable disclosing my addiction to any school counselors or therapists for decades. The stigma around sexual issues preventing people from talking about them has only grown worse in the ensuing years, unfortunately.
At most this kind of policy will force teenagers off platforms like Discord which might help with being matched with strangers, but there are still other avenues for this. You cannot prevent children from viewing porn online. You cannot lock down the entire Internet. You can only be honest with your children and not blame or reproach them for the issues they have to deal with like mine did.
In my opinion, given that my parents were fundamentally unsafe people to talk to, causing me to think that all people were unsafe, then the issue of pornography exposure became an issue. In my case, I do not believe there was any hope for me that additional legislation or restrictions could provide, outside of waking up to my abuse and my sex addiction as an adult decades later. Simply put, I was put into an impossible situation, I didn't have any way to deal with it as a child, and I was ultimately forsaken. In life, things like those just happen sometimes. All I can say was that those who forsook me were not the platforms, not the politicians, but the people who I needed to trust the most.
I believe many parents who need to think about this issue simply won't. The debate we're having here on this tech-focused site is going to pass by them unnoticed. They're not going to seriously consider these issues and the status quo will continue. They won't talk with their children to see if everything's okay. I don't have many suggestions to offer except "find your best family," even if they aren't blood related.
"In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are."
These so-called "platforms" already collect data about who people are in order to facilitate online advertising and whatever else the "platform" may choose to do with it. There is no way for the user to control where that data may end up or how it may be used. The third party can use the data for any purpose and share it with anyone (or not). Whether they claim they do or don't do something with the data is besides the point, their internal actions cannot be verified and there are no enforceable restrictions in the event a user discovers what they are doing and wants to stop them (at that point it may be too late for the user anyway)
"Tech" journalists and "tech bros" routinely claim these "platforms" know more about people than their own families, friends and colleagues
That's not "privacy"
Let's be honest. No one is achieving or maintaining internet "privacy" by using these "platforms", third party intermediaries (middlemen) with a surveillance "business model", in order to communicate over the internet
On the contrary, internet "privacy" has been diminishing with each passing year that people continue to use them
The so-called "platforms" have led to vast repositories of data about people that are used every day by entities who would otherwise not be legally authorised or technically capable of gathering such surveillance data. Most "platform" users are totally unaware of the possibilities. The prospect of "age verification" may be the wake up call
"Age verification" could potentially make these "platforms" suck to a point that people might stop using them. For example, it might be impossible to implement without setting off users' alarm bells. In effect, it might raise more awareness of how the vast quantity of data about people these unregulated/underregulated third parties collect "under the radar" could be shared with or used by other entities. Collecting ID is above the radar and may force people to think twice
The "platforms" don't care about "privacy" except to control it. Their "business model" relies on defeating "privacy", reshaping the notion into one where privacy from the "platform" does not exist
Internet "privacy" and mass data collection about people via "platforms" are not compatible goals
"... our founders displayed a fondness for hyperbolic vilification of those who disagreed with them. In almost every meeting, they would unleash a one-word imprecation to sum up any and all who stood in the way of their master plans.
"Bastards!" Larry would exclaim when a blogger raised concerns about user privacy."
- Douglas Edwards, Google employee number 59, from 2011 book "I'm feeling lucky"
If a user decides to stop using a third party "platform" intermediary (middleman) that engages in data collection, surveillance and ad services, for example, because they wish to avoid "age verification", then this could be the first step toward meaningful improvements in "internet privacy". People might stop creating "accounts", "signing in" and continuing to be complacent toward the surreptititious collection of data that is subsequently associated with their identity to create "profiles"
>"None of this is an argument against protecting children online. It is an argument against pretending there is no tradeoff"
Tradeoff acknowledged, and this runs both sides, there's hundreds of risks that these policies are addressing.
To mention a specific one, I was exposed to pornography online at age 9 which is obviously an issue, the incumbent system allowed this to happen and will continue to do so. So to what tradeoffs in policy do detractors of age verification think are so terrible that it's more important than avoiding, for example, allowing kids first sexual experiences to be pornography. Dystopian vibes? Is that equivalent?
Or, what alternative solutions are counter-proposed to avoid these issues without age verification and vpn bans.
Note 2 things before responding:
1)per the original quote, it is not valid to ignore the trade offs with arguments like "child abuse is an excuse to install civilian control by governments"
2) this was not your initiave, another group is the one making huge efforts to intervene and change the status quo, so whatever solution is counterproposed needs to be new, otherwise, as an existing solution, it was therefore ineffective.
If any of those is your argument, you are not part of the conversation, you have failed to act as wardens of the internet, and whatever systems you control will be slowly removed from you by authorities and technical professionals that follow the regulations. Whatever crumbs you are left as an admin, will be relegated to increasingly niche crypto communities where you will be pooled with dissidents and criminals of types you will need to either ignore or pretend are ok. You will create a new Tor, a Gab, a Conservapedia, a HackerForums, and you will be hunted by the obvious and inequivocal right side of the law. Your enemy list will grow bigger and bigger, the State? Money? The law? God? The notion of right and wrong which is like totally subjective anyways?
> I was exposed to pornography online at age 9....allowing kids first sexual experiences to be pornography
I was initially exposed to pornography at 8 years old, by finding a disgarded magazine in a hedge. However this was pretty soft.
I was exposed to serious pornography at 10 years by finding a hidden VHS tape in the back of a drawer at a friends house and getting curious. This was hardcore German stuff with explicit violence. This has caused me to have therapy in my lifetime.
This was all in the 80s by the way.
Therefore anything you are mentioning happened long before the internet, and is totally possible in a completely offline world as well. So how do these new digital laws 'protect children' again?
We should just ban smartphones, it's where a great deal of the harm comes from and is harder for parents to manage. No need for children to have cameras connected to the internet whether via smartphones or computers.
I know many will disagree and that is ok. Imo we need global id based on nation states national id. I know that the US doesn't have that, but the rest of the developed world do. I don't want id on porn sites because I don't think that is necessary, but I want bot-free social media, 13+ sharing forums like reddit and I want competitive games where if you are banned you need your brothers id to try cheating again.
> And the only way to prove that you checked is to keep the data indefinitely.
This is not true and made me immediately stop reading. If a social media app uses a third party vendor to do facial/ID age estimation, the vendor can (and in many cases does) only send an estimated age range back to the caller. Some of the more privacy invasive KYC vendors like Persona persist and optionally pass back entire government IDs, but there are other age verifiers (k-ID, PRIVO, among others) who don't. Regulators are happy with apps using these less invasive ones and making a best effort based on an estimated age, and that doesn't require storing any additional PII. We really need to deconflate age verification from KYC to have productive conversations about this stuff. You can do one thing without doing the other.
If you don't keep and cross-reference documents it is really easy to circumvent, e.g. by kids asking their older siblings to sign them up.
I don't think a bulletproof age verification system can be implemented on the server side without serious privacy implications. It would be quite easy to build it on the client side (child mode) but the ones pushing for these systems (usually politicians) don't seem to care about that.
Yep, it is easy to circumvent, and the silver lining of all of this is that regulators don't care. They care that these companies made an effort in guessing.
I work at a European identity wallet system that uses a zero knowledge proof age identification system. It derives an age attribute such as "over 18" from a passport or ID, without disclosing any other information such as the date of birth. As long as you trust the government that gave out the ID, you can trust the attribute, and anonymously verify somebodies age.
I think there are many pros and cons to be said about age verification, but I think this method solves most problems this article supposes, if it is combined with other common practices in the EU such as deleting inactive accounts and such. These limitations are real, but tractable. IDs can be issued to younger teenagers, wallet infrastructure matures over time, and countries without strong identity systems primarily undermine their own age bans. Jurisdictions that accept facial estimation as sufficient verification are not taking enforcement seriously in the first place. The trap described in this article is a product of the current paradigm, not an inevitability.
According to the EU Identity Wallet's documentation, the EU's planned system requires highly invasive age verification to obtain 30 single use, easily trackable tokens that expire after 3 months. It also bans jailbreaking/rooting your device, and requires GooglePlay Services/IOS equivalent be installed to "prevent tampering". You have to blindly trust that the tokens will not be tracked, which is a total no-go for privacy.
These massive privacy issues have all been raised on their Github, and the team behind the wallet have been ignoring them.
Regulatory capture at its finest. Such a ruling gives Apple and Google a duopoly over the market.
Maybe worse, it encourages the push of personal computers to be more mobile like (the fact that we treat phones as different from computers is already a silly concept).
So when are we going to build a new internet? Anyone playing around with things like Reticulum? LoRA? Mesh networks?
16 replies →
> EU's planned system requires highly invasive age verification
EUDI wallets are connected to your government issued ID. There is no "highly invasive age verification".
We are literally sending a request to our government's server to sign, with their private key, message "this john smith born on 1970-01-01 is aged over 18" + jwt iat. There are 3 claims in there. They are hashed with different salts. This all is signed by the government.
You get it with the salts. When you want to prove you are 18+ you include salt for the "is aged over 18" claim, and the signed document with all the salts and the other side can validate if the document is signed and if your claim matches the document.
No face scanning, no driver license uploading to god-knows-where, no anything.
> to obtain 30 single use, easily trackable tokens that expire after 3 months
This is the fallback mechanism. You are supposed to use bbs+ signatures that are zero knowledge, are computed on the device and so on. It is supposed to provide the "unlinkability". I don't feel competent enough to explain how those work.
> jailbreaking / "prevent tampering"
This is true. The eidas directive requires that secret material lives in a dedicated hardware / secure element. It's really not much different than what a banking app would require.
> You have to blindly trust that the tokens will not be tracked
This is not true, the law requires core apps to be opensource. Polish EUDI wallet has been even decompiled by a youtuber to compare it with sources and check if the rumors about spying are true. So you can check yourself if the app tracks you.
Also we can't have a meaningful discussion without expanding on definition of "tracking".
Can the site owner track you when you verify if you are 18+? Not really, each token is unique, there should be no correlation here.
Can the government track you? No, not alone.
Can the site owner and the government collude to track you? Yes they can! Government can track all salts for your tokens, site can collect all salts, they can compare notes. There are so called policy mitigations currently: audits and requirements for governments to remove salts from memory the moment stuff is issued.
Can they lie? Sure.
Can the site owner and the government collude to track you if you are using bbs+? No. Math says no.
Can they lie if you are using bbs+? Math says no.
43 replies →
Thanks for posting this.
The inherent problem with all zero knowledge identity solutions is that they also prevent any of the safeguards that governments want for ID checking.
A true zero knowledge ID check with blind signatures wouldn't work because it would only take a single leaked ID for everyone to authenticate their accounts with the same leaked ID. So the providers start putting in restrictions and logging and other features that defeat the zero knowledge part that everyone thought they were getting.
13 replies →
> It also bans jailbreaking/rooting your device, and requires GooglePlay Services/IOS equivalent be installed to "prevent tampering".
The EUDI spec is tech neutral.
What the EUDI mandates is a high level of assurance under the eIDAS 2.0 regulation and the use of a secure element or a trusted execution environment to store the key.
Link?
1 reply →
I'm sorry to say it but the fact it bans jailbreaking/rooting your device really makes me believe "think of the children" isn't their real goal.
There's some clever kids out there but come on.
I feel like you're glossing over a lot of uncomfortable but important implementation details here. None of this works without effectively banning personal computing and tying the whole system to secure attestation (which in practice means non-jailbroken apple & android devices). No thanks.
Can we go back to defaulting to parenting instead of nanny-states? Maybe make "age sensitive" websites include this fact into a header (or whatever) so that parents can decide who in their household can access which content. Instead of having some overreaching corpo-government implementing draconian "verification" systems.
If I want to live under the thumb of a strongly verified "benevolent" dictatorship, I'll move to China. No need to create a second China at home.
> It derives an age attribute such as "over 18" from a passport or ID, without disclosing any other information such as the date of birth.
How? If it’s analyzes my ID 100% client side I can fake any info I want. If my ID goes to a server, it’s compromised IMO.
I think the zero proof systems being touted are like ephemeral messaging in Snapchat. That is, we’re being sold something that’s impossible and it only “works” because most people don’t understand enough to know it’s an embellishment of capabilities. The bad actors will abuse it.
Zero proof only works with some kind of attestation, maybe from the government, and there needs to be some amount of tracking or statistics or rate limiting to make sure everyone in a city isn’t sharing the same ID.
Some tracking turns into tracking everything, probably with an opaque system, and the justification that the “bad guys” can’t know how it works. We’ve seen it over and over with big tech. Accounts get banned or something breaks and you can’t get any info because you might be a bad guy.
Does your system work without sending my ID to a server and without relying on another party for attestation?
> If my ID goes to a server, it’s compromised IMO
Might be breaking news, but the state already has your passport ID in a server.
There's no dynamic analysis done, necessarily. In the Swiss design, fex, SD-JWTs are used for selective disclosure. For those, any information that you can disclose is pre-hashed and included in the signed credential. So `over_18: true` is provided as one of those hashes and I just show this to the verifier.
The verifier gets no other information than the strictly necessary (issuer, expiry, that kind of thing) and the over 18 bit, but can trust that it's from a real credential.
That's not strictly a zero knowledge proof based system, though, but it is prvacy-preserving.
3 replies →
Yes it does actually. You load your ID into your phone with the MRZ and NFC. The cryptographic proof inside your ID is used to verify that it was issued by an official government. So your ID is not being sent to a central server.
The reusing another ID is an issue. In some countries they will have a in person check to verify only you can load your ID into your phone. But then you still have the problem of sending a verification QR code to someone else and have them verify it. This might be solved by rolling time-gated QR codes and by making it illegal to verify someone else's verifications. But this is a valid concern and a problem that still needs solving.
Attestation from government sounds like the ideal solution. This could actually provide _more_ privacy because we can begin using attestation for things we currently use IDs for such as “Has the privilege of driving a car” or “Can purchase alcohol”
11 replies →
> If it’s analyzes my ID 100% client side I can fake any info I want. If my ID goes to a server,
amplifying your point, there is effectively no way for the layperson to make this distinction. And because the app needs to send data over an encrypted channel, it would be difficult at best for a sophisticated person to determine whether their info is being sent over the wire.
3 replies →
Immigrants do not have an ID for up to a few years when they move to Germany. Just this week the Berlin immigration office stopped issuing plastic residence cards for budget reasons, so people get a sticker in their passport.
Passport recognition is also spotty. The ID verification providers used by banks do not recognise Indian passports.
Will we exclude a few million people because it’s too expensive to verify that they are over 18?
Add this to “falsehoods programmers believe about ID verification”.
> Will we exclude a few million people because it’s too expensive to verify that they are over 18?
Yes. We absolutely will. KYC services is something that no one wants and everyone hates, thus there is no motivation to make it better. And if any, "better" might mean more invasive, because that means more data to mine and sell.
So, sure, excluding millions of people from KYC because it's cheaper to reject them than it is to study their documents - is the right decision business wise.
I am speaking as a person in the very same position.
In Austria you don't need an Austrian passport/Personalausweis for a Digital ID registration. Your original passport (or equivalent) in combination with a certificate of residence, student permit or similar is fine.
If the age verification is going to mandate government issued ID, the government issuer can be the Trust Anchor issuing a Digitally Signed Credential for the zero knowledge proof - using any available open source zero-knowledge process:
1) zkcreds-rs (zk-creds) [1]
2) zkLogin (Sui Foundation) [2]
3) TLSNotary [3]
4) DECO (Chainlink/Cornell) [4]
5) Anon-Aadhaar [5]
[1] https://github.com/rozbb/zkcreds-rs
[2] https://github.com/mystenlabs/sui/tree/main/sdk/zklogin
[3] https://github.com/tlsnotary/tlsn
[4] https://chain.link/education/zero-knowledge-proof-zkp#preser...
[5] https://github.com/anon-aadhaar/anon-aadhaar
Apologies that I'm latching onto your post for visibility, but for the sake of discussion - the European Identity Digital Wallet project specification and standardisation process is in the open and lives on github (yeah, the irony isn't lost on me :) ):
https://github.com/eu-digital-identity-wallet
https://eudi.dev/latest/
Everything's very much WIP, but it aims to provide a detailed Archictecture and Reference Framework/Technical Specifications and a reference implementation as a guideline for national implementations:
https://github.com/eu-digital-identity-wallet/eudi-doc-archi...
https://github.com/eu-digital-identity-wallet/eudi-wallet-re...
https://github.com/eu-digital-identity-wallet/eudi-doc-stand...
You'll find several (still evolving) Technical Specifications regarding ZKPs (including a discussion area) in the latter.
> Jurisdictions that accept facial estimation as sufficient verification are not taking enforcement seriously in the first place.
Or they want to spy on people.
In your system, can companies verify age offline, or do they need to send a token to the Government's authority to verify it (letting the Government identify and track users)?
Switzerland is working on a system that does the former, but if Government really wants to identify users, they can still ask the company to provide the age verification tokens they collected, since the Government hosts a centralized database that associates people with their issued tokens.
Aren't the companies also expected to do revocation checking, essentially creating a record of who identified where, with a fig leaf of "pseudonymity" (that is one database join away from being worthless)?
1 reply →
That assumes the companies store the individual tokens, as does the government. Neither of which are part of the design, but could be done if both sides desired it.
The Swiss design actually doesn't store the issued tokens centrally. It only stores a trust root centrally and then a verifier only checks the signature comes from that trust root (slightly simplified).
3 replies →
this is slightly better but not the hero we want or need. zeero knowledge proofs are improvement over uploading raw documents, trust is still an issue here. why should users have to authenticate with a government-backed identity wallet to access platforms to play games or access a website in the first place. we didnt have any of these guards in the 90s and early 2000s and everybody turned out just fine . in fact the average gen z is in a lot worse place than we used to be despite that we had complete raw algorithm supervision free access to the internet with far more disturbing content (remember ogrish and KaZaA)
The average person does not understand the math behind zero-knowledge proofs. They only see that state infrastructure is gatekeeping their web access. Furthermore, if the wallet relies on a centralized server for live revocation checks, the identity provider might still be able to log those authentication requests, effectively breaking anonymity at the state level.
On a practical level, this method verifies the presence of an authorized device rather than the actual human looking at the screen. Unless the wallet demands a live biometric scan for every single age check, they will simply bypass the system using a shared family computer or a parent's unlocked phone. We used to find our way around any sort of nanny software (remember net nanny)
what you are describing still remains a bubble and I really hope Americans aren't looking at EU for any sort of public policy directions here.
> we didnt have any of these guards in the 90s and early 2000s and everybody turned out just fine
One of the most highly valued tech companies of today makes a software that sometimes talks its user's into killing themselves. Some guy put "uwu notices bulge" on a bullet casing and shot Charlie Kirk: things turned out fine indeed.
1 reply →
I think there's a tradeoff triangle here, not dissimilar to Zooko's triangle or the CAP theorem, where the three aspects are age verification, privacy, and the freedom to run custom software on devices of your choosing.
You can have no system at all, which gives you freedom and privacy, but not age verification. You can have ID uploads, which give you age verification and freedom, but not privacy. You can have a ZKP-based system, which gives you age verification and privacy, but not freedom. This is because you need a way to prevent one unscrupulous ID owner from issuing millions of valid assertions for any interested user.
As soon as age-gated access depends on a government-issued credential, you're implicitly tying participation to state identity infrastructure
> As long as you trust the government that gave out the ID
I'm a citizen of a European Union member, I trust my government to issue me an ID and use said ID in my interactions with the state, I do not trust my state with anything more than that.
That is exactly the trust I mean. You need to trust that country X gives out valid IDs. If you have sketchy company Y giving out IDs to everyone, you probably would not trust any attributes derived from that ID. If you trust that a country gives out valid IDs, you can trust the information derived from that ID. You do not really need to trust your government any more than that for this system to work.
This part of trust was not about you trusting the government though so it is okay.
You have a much more trustworthy government than mine, sadly.
That's really awesome. I hope that soon we will also have humanity verification without sacrificing our anonymity.
With LLMs and paid actors wreaking havoc on social media I do think that social media needs pivot towards allowing only human users on it. I wrote about this here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
Correct. A ZK Proof backed identity system is a significant bump up in both privacy and security to even what we have right now.
Everyone does realize we're being constantly tracked by telemetry, right?
A proper ZK economy would mitigate the vast majority of that tracking (by taking away any excuse for those in power to do so under the guise of "security") and create a market for truly-secure hardware devices, while still keeping the whole world at maximal security and about as close to theoretical optimum privacy as you're going to get. We could literally blanket the streets with cameras (as if they aren't already) and still have guarantees we're not being tracked or stored on any unless we violate explicit rules we pre-agree to and are enforceable by our lawyers. ZK makes explicit data custody rules the norm, rather than it all just flowing up to whatever behemoth silently owns us all.
Explain how the plastering of streets with cameras can be done in a privacy-preserving way?
In Amsterdam 1850 the municipality kept track of people's names, address, age, gender and religion (bevolkingsregister). It meant nothing at this time, but 90 years later the Nazi used these lists to murder jewish people going house by house. Thanks for the partisans setting this archive ablaze, life were saved.
I'm not saying it's right or wrong, you tell me, I just want to point at this random timeline.
I shudder when I think of how effective the Stasi would have been in the digital age. The only thing checking them was the labour demands of surveillance.
When Trump came into power a second time, and the ICE-nazification became apparent, I reached out to my government and asked them what they were doing to make it harder for "Trumpism" to happen here. No reply. Just crickets.
Hoovering up less data would be a really fucking good start. There's something about babies and bathwater, but by god this has proven to be very dangerous bathwater time and time again.
First let me clearly state that I appreciate the amount of thought you guys are putting into creating better systems that have high privacy guarantees. I concede to you, that in some situations, your system leads to better privacy.
But I don't look at this on a purely technological level. These identity-based systems are instruments of control. Right now everything is still in flux with how these tools will be used and how accessible they are to the general population and the many minorities therein. I simply don't trust our politicians to do the right thing short-term and long-term. The establishment of the GDPR has been a major victory for better privacy legislation and now the Commission wants to hollow it out. The Commission also wants chat control to increase the amount of mass surveillance in Europe.
There is a potential future, where we all win. But I am highly skeptical, that in the current political climate, we will end up there.
You mean that system that requires either to use an original unmodified Android phone, or a iOS phone and it does not work in absolutely anything else?
No it is open-source and portable to any platform you want. We currently support iOS and Android through Play store and F-droid, but that is just because most of the market is there at the moment.
This is true, but I think it's more that those jurisdictions don't actually care about something solving this securely so much as they want face scans for other purposes?
I have a few questions.
In that system does the age verification result come with some sort of ID linked to my government issued ID card? Say, if I delete my account on a platform after verifying and then create a new one, will the platform get the same ID in the second verification, allowing it to connect the two and track me? Or is this ID global, potentially allowing to track me through all platforms I verified my age on?
What a verification process looks like from the user perspective? Do I have to, as it happens now, pull out my phone, use it as a card reader (because I don't have a dedicated NFC device on my computer), enter the pin, and then I'll be verified on my computer so I can start browsing social media feed? Or, perhaps, you guys have come up with a simpler mechanism?
The wallet ecosystem is still really varied at the moment. Our implementation is unlinkable. So an issuer cannot track where you use the attribute. And a verifier cannot see that you've used the same attribute multiple times with their system. This is great for privacy and tracking protection, but not so great for other things. For example, people sending their QR codes to other people with the correct attribute (like maybe an underage person sending an 18+ check to an adult), is hard to solve for because they are unlinkable.
Most systems right now have you load data in your phone. Then when a check happens, you scan a QR code. You then get a screen on your phone saying X wants to know Y and Z about you, do you want to share this information? Then you just choose yes or no.
For your social media example. You would just get a QR code on your pc, then pull out your phone, scan and verify, then start browsing social media on your pc.
In the Swiss system, it depends on what they verified. If they required your full ID, that has a document number like a passport and they could track that.
If they did the right thing and only asked for the over 18 bit, then they wouldn't have a trackable identifier.
You are describing a situation where a pairwise pseudonymous identifier is generated. I don't think any real system does this with government IDs, but it might be possible.
This is really cool and I want it for inter-government identification. Eg country B can check a ZK proof that I'm a citizen of country A, allowed to drive, not a criminal, have a degree, etx
The requirement to use google or apple services is a deal breaker. If I can't verify my age using an EU wallet without having an account with a US tech company what is the point of any of this?
ZK proof can't solve the TOCTOU problem.
No one would be foolish enough to trust their government nor the EU. You should be ashamed of working for such "people". Thanks for helping implementing a surveillance state.
You don't have to trust your government to employ them, the key is to bake in and maintain rigorous checks and balances, demand transparency, routinely audit and fire people for corruption, etc.
Not only EU -- Digital ID on iPhone does this today, and is accepted by many USA airports for travel, etc., with rollout for DLs.
Huh?
I just don't want to have to ID myself at every corner of the internet. Whether the site receives my details or not.
I've heard they even want to mandate periodic re-checks now which is insane. The internet should remain free.
Besides, if parents don't want to give access to social media they can just not give their kids a phone, or just use the many parental control features available on it. Every phone has this these days.
And even if the government wants to ban this stuff for all kids (which I would not agree with but ok I don't have kids so I don't really care and parents do seem to want this), they don't have to enforce it this way. They can just make the parents liable if the kids are found to have access.
To me this is just another attempt at internet censorship and control.
Good luck finding the single government in the world that actually wants that, rather than it being a pretext for control that is too sweet to pass up. If you manage to find them, post an article on HN about it as top places to move to.
The system you're describing is good for the masses, not for those with power.
For me it is disqualified for usage because I need to buy into a Google or Apple ecosystem. At least the reference implementation does. This is just the next level of enshitification. And no, I don't need a digital blockwart at all.
And I have zero illusion privacy is compromised, it is trivial to identify devices these days, so it doesn't even work technically.
Next sentence we hear some empty bickering about digital sovereignty. This is all bullshit.
We support F-droid, so you could get a degoogled android version and use that to load the app on your phone. The app could also be ported to other platforms, but right now there is really no market for it.
1 reply →
Where can we learn more about your architecture?
Someone brought up the need for device attestation for trust purposes (to avoid token smuggling for example). That would surely defeat the purpose (and make things much much worse for freedom overall). If you have a solution that doesn't require device attestation, how does that solve the smuggling issue (are tokens time-gated, is there a limit to token generation, other things)?
We do not require an attestation and things like token smuggling is still a problem we need to solve. We have a system that prioritizes unlinkability. So an issuer cannot track the attribute they give you. And a verifier cannot link multiple disclosures with the same attribute. This privacy really helps things like token smuggling however. Time-gated tokens may increase the difficulty, but will probably not make it impossible. Making it illegal to verify someone else's qr codes could also help of course.
It's this I believe: https://www.w3.org/TR/vc-data-model-2.0/
3 replies →
[dead]
Yeah, but how to convince investors that trusting the government-issued ID is good enough? /s
> As long as you trust the government…
You should never trust the government
Age verification is very hard, because parents will give their children their unlocked account, and children will steal their parents' unlocked account. If that's criminalized (like alcohol), it will happen too often to prosecute (much more frequently than alcohol, which is rarely prosecuted anyways). I don't see a solution that isn't a fundamental culture shift.
If there's a fundamental culture shift, there's an easy way to prevent children from using the internet:
- Don't give them an unlocked device until they're adults
- "Locked" devices and accounts have a whitelist of data and websites verified by some organization to be age-appropriate (this may include sites that allow uploads and even subdomains, as long as they're checked on upload)
The only legal change necessary is to prevent selling unlocked devices without ID. Parents would take their devices from children and form locked software and whitelisting organizations.
I don't understand how this is any better.
It's my job as a parent (and I have several kids...) to monitor the things they consume and talk with them about it.
I don't want some blanket ban on content unless it's "age appropriate", because I don't approve that content being banned. (honestly - the idea of "age appropriate" is insulting in the first place)
Fuck man, I can even legally give my kids alcohol - I don't see why it's appropriate to enforce what content I allow them to see.
And I have absolutely all of the same tools you just discussed today. I can lock devices down just fine.
Age verification is a scam to increase corporate/governmental control. Period.
You should be able to choose what's age-appropriate for your kids. Giving them access to e.g. "PG-13" media when they're 9 isn't the problem. Giving mature kids unrestricted access isn't a problem. The problem is culture:
- Many parents don't think about restricting their kids' online exposure at all. And I think a larger issue than NSFW is the amount of time kids are spending: 5 hours according to this survey from 2 years ago https://www.apa.org/monitor/2024/04/teen-social-use-mental-h.... Educating parents may be all that is needed to fix this, since most parents care about their kids and restrict them in other ways like junk food
- Parents that want to restrict their kids struggle with ineffective parental controls: https://beasthacker.com/til/parental-controls-arent-for-pare.... Optional parental controls would fix this
3 replies →
> Fuck man, I can even legally give my kids alcohol - I don't see why it's appropriate to enforce what content I allow them to see.
In the USA it depends on the state. Federal guidelines for alcohol law does suggest exemptions for children drinking under the supervision of their parents, but that's not uniformly adopted. 19 states have no such exceptions, and in many of the remaining 31, restaurants may be banned from allowing alcohol consumption by minors even when their parents are there.
3 replies →
Your kids can’t buy alcohol though. If you want to unlock your phone and let your kids read smut then more power to you. Age gates do not and never will stop that. But I sure as hell don’t want companies selling porn to 5 yr olds.
The law is there to protect children in the case they have absent/neglectful parents. Unfortunately not every child has a parent as aware as you.
1 reply →
this seems to be an issue of being able to be a parent, period.
yup we should all be able, to talk to our kids instead of screaming at them.
People angrily replying to the top comment on this article are going to be pissed when they find out about this guy.
> Age verification is very hard, because parents will give their children their unlocked account, and children will steal their parents' unlocked account
More simply: If ID checks are fully anonymous (as many here propose when the topic comes up) then every kid will just have their friends’ older sibling ID verify their account one afternoon. Or they’ll steal their parents’ ID when they’re not looking.
Discussions about kids and technology on HN are very weird to me these days because so many commenters have seemingly forgotten what it’s like to be a kid with technology. Before this current wave of ID check discussions it was common to proudly share stories of evading content controls or restrictions as a kid. Yet once the ID check topic comes up we’re supposed to imagine kids will just give up and go with the law? Yeah right.
The older sibling should be old enough to know better. Or if they're still a kid, they can have their privileges temporarily revoked.
This problem probably can't be solved entirely technologically, but technology can definitely be a part of solving it. I'm sure it's possible to make parental controls that most kids can't bypass, because companies can make DRM that most adults can't bypass.
5 replies →
> If ID checks are fully anonymous (as many here propose when the topic comes up) then every kid will just have their friends’ older sibling ID verify their account one afternoon.
Exactly the same way that kids used in former days to get cigarettes or alcohol: simply ask a friend or a sibling.
By the way: the owners of the "well-known" beverage shops made their own rules, which were in some sense more strict, but in other ways less strict than the laws:
For example some small shop in Germany sold beverages with little alcohol to basically everybody who did not look suspicious, but was insanely strict on selling cigarettes: even if the buyer was sufficiently old (which was in doubt strictly checked), the owner made serious attempts to refuse selling cigarettes if he had the slightest suspicion that the cigarettes were actually bought for some younger person. In other words: if you attempted to buy cigarettes, you were treated like a suspect if the owner knew that you had younger friends (and the owner knew this very well).
Probably will limit to one device per person, to save the children, so we won’t share with others.
(So you need to keep all your stuff into one device to be fully tracked easily. And have no control over your device, share your location… )
Circumventing controls as a kid is what taught me enough about computers to get the job that made college affordable (in those days you could just boot windows to a livecd Linux distro and have your way with the filesystem, first you feel like a hacker, later the adults are paying you to recover data).
If we must have controls, I hope the process of circumventing them continues to teach skills that are useful for other things.
More simply: If ID checks are fully anonymous (as many here propose when the topic comes up) then every kid will just have their friends’ older sibling ID verify their account one afternoon. Or they’ll steal their parents’ ID when they’re not looking.
Digital ID with binary assertion in the device is an API call that Apple's app store curation can ensure is called on app launch or switch. Just checking on launch or focus resolves that problem. It's no longer the account being verified per se, it's the account and the use.
Completely agree. The internet works differently than how people want it to, and filtering services are notoriously easy to bypass. Even if these age-verification laws passed with resounding scope and support, what would stop anyone from merely hosting porn in Romania or some country that didn't care about US age-verification laws. The leads to run down would be legion. I think you could seriously degrade the porn industry (which I wouldn't necessarily mind) but it would be more or less impossible to prevent unauthorized internet users from accessing pornography. And of course that's the say nothing of the blast radius that would come with age-verification becoming entrenched on the internet.
> what would stop anyone from merely hosting porn in Romania or some country that didn't care about US age-verification laws
A government could implement the equivalent of China's great firewall. Even if it doesn't stop everyone, it would stop most people. The main problem I suspect is that it would be widely unpopular in the US or Europe, because (especially younger) people have become addicted to porn and brainrot, and these governments are still democracies.
10 replies →
It’s very odd to hear you complain about age verification but then be fine with ruining the porn industry
The only needed culture shift is everyone should realize that it's ultimately the parents/teachers' duty to educate the kids.
If parents think it's okay for their kids to use Facebook/X/whatever somehow responsibly, they should not be punished or prosecuted for that. Yes, I do believe it applies to alcohol too.
It's how it works in physical world. We let the parents to decide whether hiking/swimming/football/walking to the school are too dangerous for their kids. We let the parents to decide which books are suitable for their kids. But somehow when it comes to the internet it's the government's job. I can't help but think there is an astroturf movement manufacturing the consent rn.
Just a personal anecdote from my life - I have set up Youtube account for my kid with correct age restrictions (he is 11). Also this account is under family plan so there are no ads.
My kid logs out of this account so he can watch restricted content. I wonder - what is PG rating for logged out experience?
Age verification has always been about normalization of it, and about mitigating snow flake blame.
- I always give my children unlocked devices, because I know what they are doing
- Internet is not a safe space, and there will always be means of circumventing protections. Age verifications do not protect anybody
- Parents do not want to 'rise children' they give phones to kids and expects youtube to show kids only good stuff. They expect this from platform
Similar to alcohol, the system doesn't prevent all underage access
> If there's a fundamental culture shift,
You mean this culture shift is needed for the masses but I don't think that's the case. In my widest social circle I am not aware of anyone giving alcohol to young kids (yes by the time they are 16ish yes but even that's rare). Most guardians would willingly do similar with locked devices.
The real problem is that the governments/companies won't get to spy on you if locked devices are given to children only. They want to spy on us all. That's the missing cultural shift.
> Most guardians would willingly do similar with locked devices.
Considering the echo chamber in which I was at school, my friends would have simply used some Raspberry Pi (or a similar device) to circumvent any restriction the parents imposed on the "normal" devices.
Oh yes: in my generation pupils
- were very knowledgeable in technology (much more than their parents and teachers) - at least the nerds who were actually interested in computers (if they hadn't been knowledgeable, they wouldn't have been capable of running DOS games),
- had a lot of time (no internet means lots of time and being very bored),
- were willing to invest this time into finding ways to circumvent technological restrictions imposed upon them (e.g. in the school network).
The kids in your social circle are used to not having access to alcohol, but they're not used to not having access to social media.
Hypothetically, if every kid in your social circle had their device "locked", the adults would probably have a very hard time the kids away from their devices, or just relent, because the kids would be very unhappy. Although maybe with today's knowledge, most people will naturally restrict new kids who've never had unrestricted access, causing a slow culture shift.
And we need a standard where websites can self-rate their own content. Then locked devices can just block all content that isn't rated "G" or whatever.
I imagine there would be a set of filters, including some on by default that most adults keep for themselves. For example, most people don't want to see gore. More would be OK with sexual content, even more would be OK with swear words, ...
Wrong incentive. If you don’t give a shit about exposing children to snuff or porn, but do give a shit about page views and ad revenue, you obviously don’t rate your content or rate it as G to increase that revenue.
So kids can drive at 16. But can’t get access to an unlocked phone until their 18? Who gets to decide the whitelist? The government?
I never specified age.
The whitelist would be decided by the market: the parents have the unlocked device, there are multiple solutions to lock it and they choose one. Which means that in theory, the dominant whitelist would be one that most parents agree is effective and reasonable; but seeing today's dominant products and vendor lock-in...
> I don't see a solution that isn't a fundamental culture shift.
What shift?
I mean look, there's a point where the manufacturers back off and entrust the parents.
Any parent can be reckless and give their children all kinds of things - poison, weapons, pornographic magazines ... at some point the device has enough protective features and it is the parents responsibility.
Digital media use is easier to conceal than weapons. My parents did not protect me from it growing up because they were not responsible, and I was harmed as a result. To this day they still do not realize I was harmed, because I did not tell them and we are not on speaking terms. Trying to be honest would have resulted in further rejection from them. This was on a personality level and I had no way to deal with this as a developing human.
I could not control how my parents were going to raise me, I was only able to play with the hand I was dealt. I hate the idea that parents are sacrosanct and do not share blame in these situations. At the same time, if this is just the family situation you're given and you're handed a device unaware of the implications, who is going to protect you from yourself and others online if your parents won't? Should anyone?
>parents will give their children their unlocked account, and children will steal their parents' unlocked account.
I think either is better than the staus quo. In the first case the parent is waiving away the protections, and in the second the kid is.
Even if a kid buys alcohol, I think it's healthier that they do it by breaking rules and faking ids and knowing that they are doing something wrong, than just doing it and having no way to know it's wrong (except a popup that we have been trained by UX to close without reading (fuck cookie legislation))
That would be the status quo if we had better parental controls.
Trying to enforce parental controls via regulation may only be as effective as Europe enforcing the DMA against Apple. But maybe not, because there's a huge market; if Apple XOR Android does it, they'll gain market share. Or governments can try incentive instead of regulation (or both) and fund a phone with better parental controls. Europe wants to launch their own phone; such a feature would make it stand out even among Americans.
Prove of adulthood should be provided by the bank after logging into a bank account. I'm sure parents just would let their bank details be stolen and such.
Of course no personal details should be provided to the site that requests age confirmation. Just "barer of this token" is an adult.
The "Bank identity" system in Czech Republic (and likely other countries) can be used to log into to various government services. The idea is that you already authenticated to the bank when getting the account, so they can be sure it is really sou when you log in - so why not make it possible for you to log in to other services as well if you want to ?
1 reply →
So we trust a bank more than the government that they won’t extend this to earn more money by disclosing more information? Bad idea. You need a neutral broker.
2 replies →
Yes we need a fundamental shift where sharing of parent accounts is akin to atleast some sort of infraction or maybe even a misdemeanor.
This could help, but without the culture shift, way too many parents will intentionally and unintentionally break that law.
Just remove "parent" and "account" from the mix and all these. Tie the screen to the human and most of these challenges go away. This is what is trying to be achieved with these laws, so we may as well institute it that way.
I actually don't hate this??? As long as parents can set up their own whitelists and it's not up to the government to have the final say on any particular block.
Parents can do this today if they wanted to
The problem of "kids accessing the Internet" is a purposeful distraction from the intent of these laws, which is population-level surveillance and Verified Ad Impressions.
1 reply →
That is actually a very good solution that is respecting privacy. And is much more effective than asking everyone for ID when opening a website or app.
How does this solve the problem at all? You're just making more problems. Now you have to deal with a black market of "unlocked" phones. You're having to deal with kids sharing unlocked phone. Would police have to wal around trying to buy unlocked phones to catch people selling them to minors? What about selling phones on the internet, would they check ID now?
SOME parents give their children access to their ID. That is NOT the same as ALL parents, and therefore is not a reason not to give those parents a helping hand.
Even just informing children that they're entering an adult space has some value, and if they then have to go ask their parents to borrow their wallet, that's good enough for me.
It would not be solved without a culture shift. But with a culture shift, giving a kid an unlocked device would be as rare as giving them drugs.
I'm sure it will occasionally happen. But kids are terrible at keeping secrets, so they will only have the unlocked device for temporary periods, and I believe infrequent use of the modern internet is much, much less damaging than the constant use we see problems from today. A rough analogy, comparing social media to alcohol: it's as if today kids are suffering from chronic alcoholism, and in the future, kids occasionally get ahold of a six pack.
2 replies →
Definitively we should have constant verification of the current user with Face ID or similar tech. Every 5 minutes of usage, your camera is activated to check who’s using your phone and validates it. So much secure and safe. /s
This is Nirvana/Perfect Solution fallacy. That's like saying limiting smoking to 18 y/o was futile because teenagers could always have some other adult buy them cigs, or use fake IDs.
Ridiculous take.
Well, age verification is the "we have to do something about this nebulous problem even if the best thing we can think of actually makes everything worse for everyone but it makes us feel better" fallacy, which is equally ridiculous.
6 replies →
I'm saying that, in today's culture, age-gating the internet is likely to be much less effective than age-gating alcohol or tobacco. Most kids spend an appalling amount of time on social media (think, 5 hours/day*); most kids didn't spend this much time or invest this much of their lives into drugs.
* according to this survey from over 2 years ago: https://www.apa.org/monitor/2024/04/teen-social-use-mental-h...
2 replies →
For the smoking analogy to fit, you'd have to have parents giving their children packs of cigarettes to play with and then being mad at Marlboro they figured out how to smoke them.
We'll try everything, it seems, other than holding parents accountable for what their children consume.
In the United States, you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using, yet somehow, the same social responsibility seems thrown out the window for parents and the web.
Yes, children are clever - I was one once. If you want to actually protect children and not create the surveillance state nightmare scenario we all know is going to happen (using protecting children as the guise, which is ironic, because often these systems are completely ineffective at doing so anyway) - then give parents strong monitoring and restriction tools and empower them to protect their children. They are in a much better and informed position to do so than a creepy surveillance nanny state.
That is, after all, the primary responsibility of a parent to begin with.
I know this is weird, but I'm in some ways not really sure who is on the side of freedom here. I get your position, but like. The whole idea of the promise of the internet has been destroyed by newsfeeds and mega-corps.
There is almost literally documented examples of Facebook executives twirling their mustaches wondering how they can get kids more addicted. This isn't a few bands with swear words, and in fact, I think that the damage these social media companies are doing is in fact, reducing the independence teens and kids that have that were the fears parents originally had.
I dunno, are you uncertain about your case at all or just like. I just like, can't help but start with fuck these companies. All other arguments are downstream of that. Better the nanny state than Nanny Zuck.
> I just like, can't help but start with fuck these companies. All other arguments are downstream of that.
The solution would then be to break them up or do things like require adversarial interoperability, rather than ineffective non-sequiturs like requiring them to ID everyone.
The perverse incentive comes from a single company sitting on a network effect. You have to use Facebook because other people use Facebook, so if the algorithm shows you trash and rage bait you can't unilaterally decide to leave without abandoning everyone still there, and the Facebook company gets to show ads to everyone who uses it and therefore wants to maximize everyone's time wasted on Facebook, so the algorithm shows you trash and rage bait.
Now suppose they're not allowed to restrict third party user agents. You get a messaging app and it can send messages to people on Facebook, Twitter, SMS, etc. all in the same interface. It can download the things in "your feed" and then put it in a different order, or filter things out, and again show content from multiple services in the same interface, including RSS. And then that user agent can do things like filter out adult content, if you want it to.
We need to fix the actual problem, which is that the hosting service shouldn't be in control of the user interface to the service.
8 replies →
> ... start with fuck these companies. Better the nanny state than Nanny Zuck.
I'm not sure how those two positions connect.
Execs bad, so laws requiring giving those execs everyone's IDs, instead of laws against twirled mustaches?
4 replies →
> I just like, can't help but start with fuck these companies. All other arguments are downstream of that. Better the nanny state than Nanny Zuck.
Wild times when we're seeing highest voted Hacker News commenters call for the nanny state.
If you're thinking these regulations will be limited to singular companies or platforms you don't use, there is no reason to believe that's true.
There was already outrage on Hacker News when Discord voluntarily introduced limited ID checks for certain features. The invitations to bring on the nanny state reverse course very quickly when people realize those regulations might impact the sites they use, too.
A lot of the comments I'm seeing assume that only Facebook or other platforms will be impacted, but there's now way that would be the case.
2 replies →
>Better the nanny state than Nanny Zuck.
For me this is a crux, at least in principle. Once online media is so centralized... the from argument freedom is diminished.
There are differences between national government power and international oligopoly but... even that is starting to get complicated.
That said... This still leaves the problem in practice. We get decrees that age-restriction is mandatory. There will be bad compliance implementations. Privacy implications.
Meanwhile a while... how much will we actually gain when it comes to child protection.
You can come up will all sorts of examples proving "Facebook bad" but that doesn't mean these things are fixed when/if regulation actually comes into play.
Those execs were also using the tactics to addict adults, and while they may have targeted teens, the problem is, at its core: humans. So no amount of nannying by either the company nor the government will solve this issue.
Who would be responsible if a child developed alcohol addiction? A nicotine problem? Any other addiction?
Exactly. The same people that should be responsible for giving them unfettered access to an internet that is no longer safe. Even adults have to be wary of getting hooked on scrolling, and while I agree that the onus is on the companies, it has been demonstrated over and over again that they will not be held to account for their behavior.
So the only logical choice left that actually preserves freedom is for parents to get off their ass and keep their child safe. Parent's that don't use filtering and monitoring software with their children should be charged with neglect. They are for sending a kid into the cold without a coat, or letting them go hungry, why is it different sending them onto the internet?
And to your last point: You are dead wrong. No government anywhere in the world has demonstrated that they have the resources, expertise, or technical knowledge to solve this problem. The most famously successful attempt is the Chinese Great Firewall, which is breached routinely by folks. As soon as a government controls what speech you are allowed to consume, the next logical step for them is to restrict what speech you can say, because waging war on what people access will always fail. I mean, Facebook alone already contains tons of content that's against its terms of service, and they have more money than God, so either they actually want that content there, or they are too understaffed to deal with the volume, and the volume problem only ever increases.
So in my view, you are the one against freedom by advocating for the government to control the speech adults can access for the sake of "protecting the children" when the actual people that are socially, morally, and legally culpable for that protection are derelict in their duties.
8 replies →
Social media is like tobacco. We went after tobacco for targeting kids, we should do the same to social media. Highly engineered addictive content is not unlike what was done to cigarettes.
25 replies →
Screw over Meta then. Not everybody else.
Meta is the bozo in a panel van with no windows. All The legit porn sites put up Big Blinking Neon Signs.
4 replies →
> Better the nanny state than Nanny Zuck.
This is a huge self own. I can't believe I'm reading this on a website called "hacker news".
3 replies →
> Better the nanny state than Nanny Zuck.
why-not-both.jpg
Maximizing corporate freedom leads inevitably to corporate capture of government.
Opposing either government concentration of power alone or corporate concentration of power alone is doomed to failure. Only by opposing both is there any hope of achieving either.
Applying that principle to age-verification, which I think is inevitable: Prefer privacy-preserving decoupled age-verification services, where the service validates minimum age and presents a cryptographic token to the entity requiring age validation. Ideally, discourage entities from collecting hard identification by holding them accountable for data breaches; or since that's politically infeasible, model the service on PCI with fines for poor security.
The motivation for this regime is to prevent distribution services from holding identification data, reducing the information held by any single entity.
5 replies →
>There is almost literally documented examples of Facebook executives twirling their mustaches wondering how they can get kids more addicted.
Then close their business. Age verification just makes their crimes even more annoying.
1 reply →
> I just like, can't help but start with fuck these companies. All other arguments are downstream of that. Better the nanny state than Nanny Zuck.
How about we reject all institutional nannies?
It is much easier to implement user-controlled on-device settings than any sort of over-the-Internet verification scheme. Parents purchase their children's devices and can adjust those settings before giving it to their kids. This is the crux of the problem, and all other arguments are downstream of this.
1 reply →
Don't conflate the Internet with Social Media. Social media is a service, just like FTP. The death of social media will not mean the death of the Internet. There's an argument that reducing social media use, by age verification or other means, will lead to a more free Internet due to reduced power of gatekeepers.
> I know this is weird, but I'm in some ways not really sure who is on the side of freedom here. I get your position, but like.
No one. You’ll see a few politicians and more individuals stuck to their principles, but anyone with major clout sees the writing on the wall and is simply working to entrench their power.
> Better the nanny state than Nanny Zuck.
Indeed, what lolberts fail to understand usually is not a choice between government vs “freedom” it’s a choice between the current government and whoever will fill up the power vacuum left by the government.
The problem is that internet is used nowadays for democratic purposes. Once you introduce a globally unique personal ID, you will be monitored. And boy, you will be monitored throughoutly. In case of any democratic process that needs to be undertaken in future against government, this very government will take the tools of identification and will knock to the doors of people who try to raise awareness and maybe mutiny. And this is what Orwell wrote about
Centralization and standardization are going to be the topic in the 21st century.
>I get your position ... There is almost literally documented examples of Facebook executives twirling their mustaches wondering how they can get kids more addicted. This isn't a few bands
Their position was to compare it to alcohol, guns, and tobacco, not bands using naughty words. Alcohol and tobacco definitely enter mustache swirling territory, getting children addicted and funding misinformation on the harms of their product.
For all the complaining some U.S.-Americans seem to do about the EU approach to these issues, things like the Digital Markets Act aim to fix exactly these types of issues.
Heh, thank you.
I appreciate GPs point about giving “parents strong monitoring and restriction tools and empower them to protect their children”. That’s good. That acknowledges that we can and should give parents tools to deal with their kids and not let them fend for themselves (one nuclear family all alone) against the various algorithms, child group pressure, and so on.
But on the whole I’m tired of the road to serfdom framing on anything that regulates corporations.
Yes. Let’s be idealistic for a minute; the Internet was “supposed to” liberate us. Now we have to play Defense every damn day. And the best we have to offer is a false choice between nanny state and tech baron vulturism?
For a second just imagine. An Internet that empowers more than it enslaves. That makes us more equal. It’s difficult but you can try.
> I know this is weird, but I'm in some ways not really sure who is on the side of freedom here.
That’s because “freedom” is complicated and doesn’t precisely map to the interests of any of the major actors. Its largely a war between parties seeking control for different elites for different purposes.
1 reply →
> documented examples of Facebook executives twirling their mustaches wondering how they can get kids more addicted
If you genuinely believe that this is about those moustache twirling executives, then I have a bridge to sell you.
Have you ever wondered why and how these systems are being implemented? Have you ever gone why Discord / Twitch / what have you and why now? Have you ever thought that this might be happening because of Nepal and the fears of another Arab spring?
https://www.aljazeera.com/news/2025/9/15/more-egalitarian-ho...
I think too many people on this platform don't understand what this is about. This is about power. It's not about what's good for you or the children. Or for the constituents. It's about power. Real power. Karp-ian "scare enemies and on occasion kill them" power.
There are many ways in which such a system could be implemented. They could have asked people to use a credit card. Adult entertainment services have been using this as a way to do tacit age verification for a very long time now. Or, they could have made a new zero-knowledge proof system. Or, ideally, they could have told the authorities to get bent. †
Tech is hardly the first industry to face significant (justifiable or unjustifiable) government backlash. I am hesitant to use them as examples as they're a net harm, whereas this is about preventing a societal net harm, but the fossil fuel and tobacco industries fought their governments for decades and straight up changed the political system to suit them. ††
FAANG are richer than they ever were. Even Discord can raise more and deploy more capital than most of the tobacco industry at the time. It's also a righteous cause. A cause most people can get behind (see: privacy as a selling point for Apple and the backlash to Ring). But they're not fighting this. They're leaning into it.
Let's take a look at what Discord asked people for a second, the face scan,
Their specific ask is to try and get depth data by moving the phone back and forth. This is not just "take a selfie" – they're getting the user to move the device laterally to extract facial structure. The "face scan" (how is that defined??) never leaves the device, but that doesn't mean the biometric data isn't extracted and sent to their third-party supplier, k-Id.
There was an article that went viral for spoofing this, https://news.ycombinator.com/item?id=46982421 . In the article, the author found by examining the API response the system was sending,
The author assumes that "this [approach] is good for your privacy." It's not. If you give me the depth data for a face, you've given me the fingerprint for that face.
We're anthropomorphising machines. A machine doesn't need pictures; "a bunch of metadata" will do just fine.
We are assuming that the surveillance state will require humans sitting in a shadow-y room going over pictures and videos. It won't. You can just use a bunch of vectors and a large multi-modal model instead. Servers are cheap and never need to eat or sleep.
Certain firms are already doing this for the US Gov, https://x.com/vxunderground/status/2024188446214963351 / https://xcancel.com/vxunderground/status/2024188446214963351
We can assume de facto that Discord is also doing profiling along vectors (presumably behavioral and demographic features) which that author described as,
Discord plugs into games and allows people to share what they're doing with their friends. For example, Discord can automatically share which song a user is listening on Spotify with their friends (who can join in), the game they're playing, whether they're streaming on Twitch etc.
In general, Discord seems to have fairly reliable data about the other applications the user is running. Discord also has data about your voice and now your face.
Is some or all of this data being turned into features that are being fed to this third-party k-ID? https://www.k-id.com/
https://www.forbes.com/sites/mattgardner1/2024/06/25/k-id-cl...
https://www.techinasia.com/a16z-lightspeed-bet-singapore-par...
k-ID is (at first glance) extracting fairly similar data from Snapchat, Twitch etc. With ID documents added into the mix, this certainly seems like a very interesting global profiling dataset backstopped with government documentation as ground truth.
I'm sure that's totally unrelated. :)
-
† like they already have for algorithmic social media and profiling, https://www.newyorker.com/magazine/2024/10/14/silicon-valley...
Somehow there's tens to hundreds of millions available for crypto causes and algorithmic social media crusades, but there's none for the "existential threat" of age verification.
†† Once again, this is old hat. See also: Turbotax, https://www.propublica.org/article/inside-turbotax-20-year-f...
1 reply →
You sound like someone that would work for the CIA or FBI if they offered you a job. Those are the types of people that I cannot and will not ever trust. I do not respect your opinion.
1 reply →
I don't get your point, at least not in relation to the GP post. I agree with GP, parents need to be more accountable. We as parents, and We should all be concerned about future children/generations, should be demanding more regulation to help force the change we need on this topic. We as a society need to treat SM like those other addictive product classes. The fact SM is addicting and execs try to juice it more, is frankly to be expected.
Vilify them all you want, but same has been done with nicotine products, alcohol products, etc. and to GPs point, we SM as a toy for our children to play with. We chose to change the rules (laws, regulations, etc) because capitalists can never be simply trusted to do what's best for anything except their bottom line. That's a fundamental law no different than inertia or gravity in a capitalistic society. That's why regulators exist. Until you regulate it, they will wear their villain badge and rake in the billions. It's easy to be disliked when the topic of your disdain is what makes you filthy rich (in other words, they don't care what you or I think of what they're doing).
1 reply →
> There is almost literally documented examples of…
lol
> Better the nanny state than Nanny Zuck.
The state can imprison you. Zuck can't.
1 reply →
Nobody is putting a gun to your head and forcing you to use Facebook or whatever other site. I quit using most social media over a decade ago. If you don't want to use it, or you don't want your children to use it, then don't use it.
1 reply →
I am a bit confused by that comment. Are parents social responsible to prevent companies from selling alcohol/guns/cigarettes to minors? If a company set up shop in a school and sold those things to minors during school breaks, who has the social responsibility to stop that?
when I was a kid in the early 90's, my state (and many others) banned cigarette vending machines since there was no way to prevent them being used by minors, unless they were inside a bar, where minors were already not allowed.
8 replies →
I think the argument is more around it being illegal so as to not be forced into playing "the bad guy". It's hard to prevent a level of entitlement and resentment if those less well parented have full access. If nobody is allowed then there's no parental friction at all.
Its unfortunate that the application of this rule is being performed at the software level via ad-hoc age verification as opposed to the device level (e.g. smartphones themselves). However that might require the rigimirole of the state forcibly confiscating smartphones from minors or worrying nepalise outcomes.
9 replies →
Parents can't easily prevent their kids from going to those kinds of stores once they're at the age where the parent doesn't need to keep an eye on them all the time and they can travel about on their own.
The difference though is that parents are generally the ones to give their kids their phones and devices. These devices could send headers to websites saying "I'm a kid" -- but this system doesn't exist, and parents apparently don't use existing parental controls properly or at all.
10 replies →
ISPs and OSs should be the ones providing these tools and make is stupid easy to set up a child's account and have a walled garden for kids to use.
39 replies →
The school, in loco parentis.
Not responsible for selling to all minors, just theirs.
Well the parents entrust their kids to the school, so they would be the ones responsible for what goes on on their premises. In turn, school computers are famously locked down to the point of being absolutely useless.
1 reply →
Companies are legally prohibited from marketing and selling certain products like tobacco and alcohol because they historically tried to.
Parents are legally and socially expected to keep their kids away from tobacco and alcohol. You're breaking legal and social convention if you allow your kids to access dangerous drugs.
Capitalist social media is exactly as dangerous as alcohol and tobacco. Somebody should be held responsible for that, and the legal and social framework we already have for dealing with people who want to get kids addicted to shit works fairly well.
7 replies →
> give parents strong monitoring and restriction tools
The problem is that it's bloody hard to actually do this. I'm in a war with my 7yo about youtube; the terms of engagement are, I can block it however I want from the network side, and if he can get around it, he can watch.
Well, after many successful months of DNS block, he discovered proxies. After blocking enough of those to dissuade him, he discovered Firefox DNS-over-HTTPS, making it basically impossible to block him without blocking every Cloudflare IP or something. Would love to be wrong about that, but it seems like even just blocking a site is basically impossible without putting nanny-ware right on his machine; and that's only a bootable Linux USB stick away from being removed unless I lock down the BIOS and all that, and at that point it's not his computer and the rules of engagement have been voided.
For now I'm just using "policy" to stop him, but IMO the tools that parents have are weak unless you just want your kid to be an iPad user and never learn how a computer works at all.
As a parent of young children, this is your entire problem:
> the terms of engagement are, I can block it however I want from the network side, and if he can get around it, he can watch.
You're treating this as a technical problem, not a parental rules problem. Your own rules say he's allowed to watch!
You have to set the expectations and enforce it as a parent.
2 replies →
Sounds like a smart kid, is part of you secretly proud of him for his tenacity?
Is it impractical to keep an eye on what he's doing on his computer, i.e. physically checking in on him from time to time?
How about holding him responsible for his own behavior, to develop respect for the rules you impose? Is it just hopeless, and if so how come? Is it impossible for him to understand why you don't want him watching certain content or why he should care about being worthy of your trust?
I'm not judging here, I'm genuinely curious.
1 reply →
I remember when I was a kid that age there were rules and some were technically enforced. But if you found a way around the technical enforcement you were in huge trouble. The equivalent here would he been, if you used a proxy to watch what you weren't meant to, then you lose all screen time indefinitely. Sneaking around parents' rules was absolutely not on.
The obvious solution would be TLS interception and protocol whitelisting. Same as corporate IT. Stick the kids' devices on a separate vLAN if you don't want to catch all the other devices in the crossfire.
Still, there's an awful lot of excellent educational content on YouTube. It seems unfortunate to block access to that. Have you considered self hosting an alternative frontend for it?
At this point why not just emancipate him. Hook him up with an easy remote job, put a lock on his bedroom and hand him the keys, and make him start paying rent. Because I’m having trouble figuring out what part of society you’re preparing him for at this stage. Respectfully.
Putting controls on the machine you want to restrict is pretty normal. While I agree with your first sentence that it's hard for parents to get proper monitoring tools, the rest of this sounds like a self-imposed problem. If you don't want to mess with the actual machine then run a proxy it has to use.
Whats so hard about taking the iPad out of they're hands? or laptop or whatever, once you catch them on sites they shouldnt be on?
You're understating the US's policy on recklessness. We have "attractive nuisances," which means that if you put a trampoline in your backyard, and a kid passing through sees it, decides to do a sick jump off of it, and breaks their leg, that was partly your fault for having something so awesome that kids would probably like.
> which means that if you put a trampoline in your backyard, and a kid passing through sees it, decides to do a sick jump off of it, and breaks their leg, that was partly your fault for having something so awesome that kids would probably like.
That's not exactly accurate. The two key parts of the attractive nuisance law are a failure to secure something combined with the victim being too young to understand the risks.
So if you put a trampoline in your front yard, that's an easy attractive nuisance case.
If you put a pool in your back yard with a fence and a locked gate, it would be much harder to argue that it was an attractive nuisance.
If a 17 year old kid comes along and breaks into your back yard by hopping a 6-foot tall fence, you'd also have a hard time knowing they didn't understand that their activities came with some risk. Most cases are about very young children, though there are exceptions
1 reply →
It would not be quite that simple. The trampoline (or pool, or whatever) would have to be visible, in a place children were likely to be, not protected by any reasonable amount of care, and the kid would have to be young enough to not know any better.
The legal doctrine is also not specific to the US, of course.
And that law is incredibly and hideously stupid, as it's a heckler's veto on having cool stuff.
The Internet is basically the final frontier where this harmful law doesn't reach, though the Karens are really trying to expand their power there.
As a parent, I think you’re understating how difficult it is to provide a specific amount of internet access (and no more) to a motivated kid. Kids research and trade parental control exploits, and schools issue devices with weak controls whether parents like it or not. I’m way at the extreme end of trying to control access (other than parents who don’t allow any device usage at all) and it has been one loophole after another.
As somebody who is entirely for restrictions on internet / social media, I think you're missing the bigger picture here. First, you assume that parents have the technical knowhow to restrict their kids from specific sites. My parents used a lot of different tools when I was a kid, but between figuring out passwords, putting my fingerprint onto my mom's phone, and spoofing mac addresses, I always found a way around the restrictions so I could stay up later.
But let's assume the majority of parents can actually do this. The problem with social media is not an individual one! We've fallen into a Nash Equilibrium, a game theory trap where we all defect and use our phones. If you don't have a phone or social media nowadays you will have much more trouble socializing than those who do, even though everyone would be better off if nobody used phones. As a teenager, you don't want to be the only one without a phone or social media. And so I truly do think the only solution is with higher level coordination.
Now, it's possible that the government isn't the right organization to enforce this coordination. Unfortunately, we don't really have any other forms of community that work for this. People already get mad at HOA's for making them trim their lawn; imagine an HOA for blocking social media! I do think the idea of a community doing this would be great though, assuming (obviously) that it was easy to move on and out of, as well as local. This would also help adults!
So to be honest, I don't think parents have the individual power to fix this, even with their kids.
It's much easier to give individual users control over their own device than to give a centralized authority control over what happens on everyone's device over the Internet. Local user-controlled toggles are just easier to implement.
All parental moderation mechanisms can and should be implemented as opt-in on-device settings. What governments need to do is pressure companies to implement those on-device settings. And what we can do as open-source developers is beat them to the punch. Each parent will decide whether or not to use them. Some people will, some won't. It's not Bob's responsibility to parent Charlie's children. Bob and Charlie must parent their own children.
To the people arguing that parents are too dumb to control their children's tech usage because they themselves are tech-illiterate: millennia ago, we invented this new thing called fire. Most people were also "too dumb" to keep their children away from the shiny flames. People didn't know what it was or how dangerous it could be. So the tribe leader (who, by the way, gropes your children) proposed a solution: centralize control of all the fire. Only the tribe leader gets to use it to cook. Everyone else just needs to listen to him. Remember, it's all for you and your children's safety.
> we invented this new thing called fire [...] So the tribe leader (who, by the way, gropes your children) proposed a solution: centralize control of all the fire
Of all the things, a "save-the-children prolegomena to the Prometheus myth" certainly wasn't on my bingo card today. So thank you for that, but I'm not aware of any reports of fire-keeping in the way you've described. Societies and religions do have sacred traditions related to fire (like Zoroastrians) but that doesn't come with restrictions on practical use AFAIK.
1 reply →
Am I the only one that is repeatedly amused at how many smart people are just caving to making this about parents/children at all?
We've literally watched things unfold in real time out in the open in the last year I don't know how much more obvious it could be that child-protections are the bad-faith excuse the powers that be are using here. Combined with their control of broadcasting/social media, it's the very thing they're pushing narratives in lockstep over. All this to effectively tie online identities to real people. Quick and easy digital profiles/analytics on anyone, full reads on chat history assessments of idealogies/political affiliations/online activities at scale, that's all this ever was and I _know_ hackernews is smart enough to see that writing on the wall. Ofc porn sites were targeted first with legislation like this, pornography has always been a low-hanging fruit to run a smear campaign on political/idealogical dissidents. It wasn't enough, they want all platform activity in the datasets.
I can't help but feel like the longer we debate the merits of good parenting, the faster we're just going to speedrun losing the plot entirely. I think it goes without saying that no shit good parenting should be at play, but this is hardly even about that and I don't know why people take the time of day. It's become reddit-caliber discussion and everyone's just chasing the high of talking about how _they_ would parent in any given scenario, and such discussion does literally nothing to assess/respond to the realities in front of us. In case I'm not being clear, talking about how correct-parenting should be used in lieu of online verification laws is going to do literally nothing to stop this type of legislation from continually taking over. It's not like these discussions and ideas are going to get distilled into the dissent on the congressional floors that vote on these laws. It is in it's own way a slice of culture war that has permeated into the nerd-sphere.
4 replies →
This only works if I ban my child from having any friends since they all have unlimited mobile access to the internet.
Could your child not just call or text their friends? Or is the real expectation to not have to intervene at all about their preferred platform?
5 replies →
Sorry, I know it's a hard line for parents to tread and it's really easy to criticize parenting decisions other people are making, but the "everyone else is doing it so I have to" always seems as lazy to me today, as it probably did to my parents when I said it to them as a teenager.
Is it more important to prevent your son from being weaponized and turned into a little ball of hate and anger, and your daughter from spending her teen years depressed and encouraged to develop eating disorders, or to make sure they can binge the same influencers as their "friends"?
4 replies →
Yes if they do bad things like drunk, have sex and do drugs.
I would start with banning cellphones.
1 reply →
this is the biggest problem, so many parents are head-in-the-sand when it comes to things that can damage a child’s mind like screen time, yet no matter how much you protect them if it’s not a shared effort it all goes out the window, then the kid becomes incentivized to spend more time with friends just for the access, and can develop a sense that maybe mom and dad are just wrong because why aren’t so-and-so’s parents so strict?
because their parents didn’t read the research or don’t care about the opportunity cost because it can’t be that big of a deal or it would not be allowed or legal right? at least not until their kid gets into a jam or shows behavioral issues, but even then they don’t evaluate, they often just fall prey to the next monthly subscription to cancel out the effects of the first: medication
1 reply →
Individual Parents vs Meta Inc (1.66T mkt cap)
May the best legal person win!
Its more like age verification corporations, identity verification corporations, the child "safety" organizations that were lobbying for Chat Control versus individuals who want to protect their privacy.
A monitoring solution might have worked for my case if my parents had monitored my Internet history, if they always made sure to check in on what I thought/felt from what I watched and made sure I felt secure in relying on them to back me up in the worst cases.
But I didn't have emotionally mature parents, and I'm sure so many children growing up now don't either. They're going to read arguments like these and say they're already doing enough. Maybe they truly believe they are, even if they're mistaken. Or maybe they won't read arguments like these at all. Parenting methods are diverse but smartphones are ubiquitous.
So yes, I agree that parents need to be held accountable, but I'm torn on if the legal avenue is feasible compared to the cultural one. Children also need more social support if they can't rely on their parents like in my case, or tech is going to eat them alive. Social solutions/public works are kind of boring compared to technology solutions, but society has been around longer than smartphones.
Should the state have force your parents to give you up for adoption? That's the social support the state can offer.
3 replies →
> We'll try everything, it seems, other than holding parents accountable for what their children consume.
The mistake in this reasoning is assuming that they are actually interested in protecting the children.
This.
The world is becoming increasingly more uncertain geopolitically. We have incipient (and actual) wars coming, and near term potential for societal disruption from technological unemployment. Meanwhile social media has all but completely undermined broadcast media as a means of social control.
This isn't about protecting children. It's about preventing a repeated of the Arab Spring in western countries later this decade.
"Think of the children" is the oldest trick in the book, and should always be met with skepticism.
1 reply →
GP isn't interested in protecting children either. Punishing parents harder does nothing to improve the lives of children — in fact it makes them much worse, because now they are addicted to Facebook and their parents are in jail. It just makes certain people feel morally righteous that someone got punished.
"We'll try everything, it seems, other than holding parents accountable for what their children consume".
We'll try anything, it seems, other than hold internet companies accountable for the society destroying shit they publish.
And it's not jusy children who's lives they are destroying.
If you are interested in learning about the other perspective, you can watch some parents’ congressional testimony here https://youtu.be/y8ddg4460xc?si=-yYduYDppF4TQWqD.
The character.ai one is gut wrenching.
> then give parents strong monitoring and restriction tools and empower them to protect their children
I think this is the right way to solve the problem.
For example, I think websites should have a header or something that indicates a recommended age level, and what kinds of more mature content and interactions it has, so that filters can use that without having to use heuristics.
I agree with you that parents should be responsible, but your argument is clearly flawed.
> you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using
In the example here, there are 3 things where age verification is required AND parents have responsibility.
It’s not just one or the other.
The same responsibilities are not “thrown out”, they are never acknowledged in the first place.
As a parent blocking websights is a joy, maybe the rule should be to allow guardians more ability to control that. Trying to block some services is not trivial
As a human, I'd love to see the rest of you fools quit that. If HN ever starts to algorithm me I'll be gone too.
And as a bonus you can block your boomer parent's access to cnn and msnbc (or whatever its called now) and perhaps fox. It will make Thankgiving a lot more pleasant for all.
PS Mom, I don't know why cnn doesn't work anymore. ;)
2 replies →
It's very easy to lock up alcohol/cigarettes, a child should never have access. Internet usage is more like broadcast media, a child should have regular access.
The positives and negatives of Internet usage are more extreme than broadcast media but less than alcohol/guns. The majority of people lack the skills to properly censor Internet without hovering over the child's shoulder full-time as you would with a gun. Best you can do is keep their PC near you, but it's not enough.
We agree that a creepy surveillance nanny state is not the solution, but training parents to do the censorship seems unattainable. As we do for guns/alcohol/cigarettes, mass education about the dangers is a good baseline.
EDIT: And some might disagree about never having access to alcohol!
Devices such as phones come with an option when you start the device asking simply is this for a child or an adult. Your router generally these days comes with a parental filter option on start up too. Heck we have chatgpt that can guide a parent through setting up a system if they want something more custom.
If people want to push, they should just push to make these set up options more ubiquitous, obvious and standardized. And perhaps fund some advertising for these features.
2 replies →
This is where Apple, Microsoft and Android need to step up. Indeed they already have in many ways with things being better than they used to be.
There needs to be a strict (as in MDM level) parental control system.
Furthermore there needs to be a "School Mode" which allows the devices to be used educationally but not as a distraction. This would work far better than a ban.
3 replies →
Ironically, the government that is pushing this only set a drinking age just a couple of years ago (as in the last 10 years). In case you believed this was actually about kids.
The speech that worked (mostly) on the children in my life involved the concept of 'cannot unsee', which they seemed to understand. There are some parallels to gun safety here because there are things that even the adults in your life try not to do and it seems perfectly reasonable to expect the same from children.
In fact being held to a standard that adults hold themselves to is frequently seen as a rite of passage. I'm a big girl now and I put on my big girl pants to prove it.
parents can be held liable for buying their kids cigarettes but, similarly, tobacco companies are (at least nominally) not supposed to target children in their advertising campaigns and in the design of their products.
It's obviously not a 1/1 comparison here, because providing ID to access the internet is not analogous to providing ID to purchase a pack of Cowboy Killers but we can extrapolate to a certain extent.
(inb4 DAE REGULATING FOR-PROFIT CORPORATIONS == NANNY STATE?!?!?!?!?)
The richest brightest minds of our generation all being motivated towards one addictive goal, and we'll just put the responsibility on the parents...I think society can collectively do better.
What? Are there billion dollar companies with huge staffs who are constantly trying to figure out how to sneak my child a gun all the time, at school, wherever they go?
I'd say this comparison is good -- we as a society have decided that people who provide alcohol, guns, and cigarettes are responsible if children are provided them. You don't get to say 'hey, you didn't watch your child, they wandered into my shop, I sold to them 2 liters of vodka and a shotgun'.
You can expect the individual to compensate for a poorly structured society all you want.
> you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using
You can only expect so much from individual responsibility. At some point you need to structure society to compensate for the inevitable failures that occur.
> They are in a much better and informed position to do so than a creepy surveillance nanny state.
I'd rather live in a nanny state than ever trust american parenting. We've demonstrated a million times over that that doesn't work and produces even more fucked up people and abused children.
If we expect Parents to treat Social Media like other unhealthy, dangerous, and highly addictive products, then that can never start with "just expect ignorant parents to all magically start doing something difficult, for no real reason".
It starts by banning kids from the internet, entirely. It starts with putting age restrictions on who can buy internet connected devices. It starts by arresting parents and teachers who hand pre-literacy kids an always-online iPad. It starts with an overwhelming propaganda campaign: Posters, Commercials, After-School Specials, D.A.R.E. officers, red ribbon week.
Then, ultimately, it still finishes with an age-gated internet where every adult is required to upload their extremely valuable personal information to for-profit companies, for free (With the added weight of being forced to agree to extreme ToS, like arbitration agreements).
So what do we do? I agree that the age of entry to the internet should match other vices (currently 21+ in the US, although really that should probably be 18+)...
It will never be acceptable for a single country's police state to extend across international borders, so... we just ban all of the UK and Australia from every web service until they get withdrawals and promise to stay nice? That could be a start.
But this whole situation in like 'freedom of speech' once you start picking and choosing what counts as "acceptable" speech, then suddenly you lose everything. You literally can't make everyone happy, because everything subjective is open to contradiction - and because there are freaks in the world who will never be satisfied by anything less than a complete global ban of everything.
Who gets a say? Do the Amish get to tell us what we are allowed to do? Where do you draw the line? You can't. Completely open is the only acceptable choice. But I still vote we start publicly mocking the parents who give their kids an ipad, and treat them like they just gave that kid a cigarette. Because seriously, they're ruining that kid.
I've already thought about it from the US's perspective and here's my path forward.
If government does not want kids to have access to the naughty bits of the internet but thinks there's something worth sharing with children then the government should provide a public internet for kids and THATS the site that will ask for a login known to belong to a kid. We already do public schooling with public funding and we do not let rando adults sit in classrooms with kids and they get a school id. Boot <18's off the public internet AT THE SOURCE when internet connectivity is PURCHASED / CONTRACTED FOR with a valid adult id / proof of age, but allow them vpn access into whatever the government thinks the child should have access to, like the schools page, I would say online encyclopedias or wikipedia type things but I'm not sure if government wants children to read about the variety of so many different things on this planet we're sorta trapped on and lets face it, restricting communications of the kids to points outside the control of parents is exactly what the government is complaining about, the government does not want kids to have free access to information.
Think of a phone or tablet that can only access the network through either a proxy or vpn but otherwise locked down. It certainly seems like it doesn't require much programming, heck have trump vibe code it for all I care.
I mean yeah, parents could just teach their kids the tough stuff because thats how it used to work anyways, well that and the libraries and schools but those can be pruned of bad books and bad teachers at the request of government anyways right? The kids could also be interviewed periodically by government to inventory what topics they have discussed with their parents to weed out the 't' or 'g' words.
I mean yeah I don't see a place for facebook in that intranet but isn't that sort of the point, we all know big social media will be incentivised to promote engagement with less regard for safety, so why do kids need facebook anyways? The instagrams and ticktalks are worse although maybe government should make a child friendly ticktalk type school social network, call it trumps school for kids for all I care, folks in power right now and a significant part of the US believe that trump knows whats good for kids right?
I mean obviously the libraries have to be REALLY REALLY cleaned up but thats just a detail. But why are parents forcing internet wierdos onto their kids with these smartphones / porno studios in their pockets? What do they think chester the molester on ticktalk is gonna have the kid upload their id? even if he does, do we really want that? c'mon man
The difference with guns, tobacco and alcohol is huge: all negatives aside, giving kids what they want makes the life of a parent so much easier. Take it away and many parents will fight. Sugar is in the same game.
I'm 40. Do I need to get my parents to vouch for me? Who vouches for them?
> then give parents strong monitoring and restriction tools
As written, this sounds very glib. I cannot take this comment seriously without a game theory scenario with multiple actors.
Should we allow kids to get cigarettes? Cocaine? Should all parents "just" be better parents and problem solved?
What is your proposed recourse for me as a parent if your kid shows my kid gore videos?
you can’t blame it on parents alone, but the odds are stacked against children and their parents, there are very smart people whose income depends on making sure you never leave your black mirror
the surveillance state is possible, achievable, and a few coordination games away from deployment with backing from a majority who should know better
inertia kills, I dunno
The greatest of uphill battles in today's current climate is trying to push anything in the realm of personal responsibility.
Politicians' whole basis for nearly every campaign is "you're helpless, let us fix it for you."
For the vast majority of problems plaguing society, the answer isn't government, it's for people changing their behavior. Same goes for parenting.
But unfortunately, "you're an adult, figure it out" isn't the greatest campaign slogan (if you want to win).
It wasn’t always this way: “Ask not what your country can do for you — ask what you can do for your country”
[dead]
The vast majority of parents aren't tech-savvy enough to be able to operate IT parental controls.
Yes, children are clever - I was one once.
A counterargument to your point that children are clever - I was also one once.
Except companies provide wholly inadequate safeguards and tools. They are buggy, inconsistent, easily circumvented, and even at time malicious. Consumers should be better able to hold providers accountable, before we start going after parents.
The only real solution is to keep children off of the internet and any internet connected device until they are older. The problem there is that everything is done on-line now and it is practically impossible to avoid it without penalizing your child.
If social media and its astroturfers want to avoid outright age bans, they need to stop actively exploiting children and accept other forms of regulation, and it needs to come with teeth.
How easy is it for kids to bypass Parental Controls on iOS devices?
4 replies →
> Except companies provide wholly inadequate safeguards and tools. They are buggy, inconsistent, easily circumvented, and even at time malicious. Consumers should be better able to hold providers accountable, before we start going after parents.
We could mandate that companies that market the products actually have to deliver effective solutions.
1 reply →
Yes, but how on earth is their malicious compliance at providing parental controls a good reason to go for the surveillance state that hurts absolutely everyone?
Social media operators love the surveillance state idea. That's why they aren't pushing against this.
I even cancelled YT Premium because their "made for kids" system interfered with being able to use my paid adult account. I urge other people to do the same when the solutions offered are insufficient.
Step 0 is physical device access. Kids shouldn't have tablets or smartphones or personal laptops before age 16.
21 replies →
The parents themselves weren't raised with the digital literacy required.
This doesn't put the parents off the hooks, if you or anyone can share any resources that are as easily consumable, viral and applicable as the content that is the issue that can reach parents I would be happy to help it spread.
The reality is kids today are facing the most advanced algorithms and even the most competent parents have a high bar to reach.
The solution is simple.
I want to permit whatever the pixels are on a childs screen. Full stop. That hasn't been solved for a reason. Because developing such a gate would work and not allow algorithms to reach kids directly and indirectly.
The alternative is not ideal, but until there's something better, what it will be and that's well proven for the mental health side of things of raising resilient kids who don't become troubled young adults - no need for social media, or touch screens until 10-13.
There are lots of ways to create with technology, and learning to use words (llms) and keyboards seems to increasingly have merit.
> "The parents themselves weren't raised with the digital literacy required."
At this point, that isn't true anymore. There was social media when the parents were school aged. The world didn't start when you were 10 and the Internet is a half century old.
1 reply →
> then give parents strong monitoring and restriction tools and empower them to protect their children.
Because parents don’t abuse massive surveillance tools.
Given that most abuse happens in the family and by parents maybe it’s a bad idea to give them so much power
So we should trust the governments of the world? The same governments that don't seem to be doing anything about a large group of people that visited a specific island to abuse minors?
2 replies →
don't you have to age verify to get alcohol? We don't leave that up to the parents. Feels like you defeated your own argument with your examples.
The internet is not an object, its a communication medium. That is an apples to organges argument, it doesn't wash.
> We'll try everything, it seems, other than holding parents accountable for what their children consume.
It’s not a fair fight. These are multi-billion dollar companies with international reach and decades of investment and research weaponized against us to make us all little addicts.
Additionally, it’s not fair or reasonable to ask parents to screen literally everything their kids do with a screen at all times any more than it was reasonable for your parents to always know what you were watching on TV at all times.
This is bootstraps/caveat emptor by a different name. It’s not “I want someone else to raise my kids.” It’s “the current state of affairs shouldn’t be so hostile that I have to maintain constant digital vigilance over my children.” Hell if you do people then lecture you about how “back in their day they played in the street and into the night” and call you a helicopter parent
You are screwed but not for the reason you claim. Its because you don't take any accountability for yourself. There is/was no hope for someone who does that at any point in human history. Is it fair? Nope...but it also doesn't mean you have 0 autonomy.
1 reply →
None of this push has anything to do with protecting children. Never has, never will. Stop helping them push the narrative, it's making the problem WAY worse.
ITT are a lot* of tech workers who made their money as a cog in the system poisoning the internet that future generations would have to swim in. I wonder if toxic waste companies also tell the parents it's strictly on them to keep their kids out of the lakes that are poisoned, but once flowed cleanly?
We live in a shared world with shared responsibilities. If you are working on a product, or ever did work on a product, that made the internet worse rather than better, you have a shared responsibility to right that wrong. And parents do have to protect their kids, but they can't do it alone with how systematically children are targeted today by predatory tech companies.
Age verification tech companies are lobbying heavily for governments to legally require their services. The proposed "solutions" are about funneling money into the hands of other tech companies and shady groups, while violating user privacy.
If anything, we should be banning the collection of any age related information to access social media and more mature content. We need companies to respect privacy, rather than legislation even more privacy violations.
16 replies →
If your goal is to "save the children", then sure, we can discuss this... if your goal, as a government, is to have everyone get some digital ID and tie their online identities to their real names, then you do just that.
We'll do everything, it seems, other than holding billionairs accountable for what their businesses consume.
We should stop pretending these age-verification rollouts are about protecting children, because they aren't and never have been.
Even if the world was full of responsible parents, there are still people and groups that want to establish a surveillance state. These systems are focused on monitoring and tracking online activity / limiting access to those who are willing to sacrifice their own personal sovereignty for access to services.
There is most definitely a cult that is obsessed with the book of revelation and seeing Biblical prophecy fulfilled, and if that isn't readily obvious to folks at this juncture in time, I'm not sure what it will take. I guess they'll have to roll out the mark of the beast before people will be willing to admit it.
It's funny, all the bible wankers screamed about "the mark of the beast" over things like RealID. Now we have fascists setting up surveillance and censorship tools to tie speech and movement to centralized ID...and they're lining up to lick boots.
You should need to show ID and prove you're over 18 to enter a church. At least we know they're actually harmful to children.
2 replies →
Excellent example of low effort cookie cutter empty rhetoric that would fit perfectly in reddit.
Do you have kids?
>> In the United States, you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using, yet somehow, the same social responsibility seems thrown out the window for parents and the web.
So anyone can walk into a shop and purchase these things unrestricted? It's not the responsibility of the seller too?
Tobacco, yes you can order pipe tobacco and cigars online sent to your house without ID.
Guns yes, you can buy a schmidt-rubin cartridge rifle or black powder revolver sent straight to your home from an online (even interstate) vendor no ID or background check, perfectly legal.
Alcohol yes, you can order wine straight to your house without ID.
These are all somewhat less known "loopholes" but not really turned out to be a problem despite no meaningful controls on the seller. You probably didn't even know about these loopholes, actually -- that's how little of a problem it's been.
It’s weird that you blame the victim.
The real question is why do we leave it to parents or intrusive surveillance instead of holding companies accountable?
>We'll try everything, it seems, other than holding parents accountable
The government took over most parenting functions, one at a time, until the actual parent does or is capable of doing very little parenting at all. If the government doesn't like the fact that it has become the parent of these children, perhaps it shouldn't have undermined the actual parents these last 80 years. At the very least, it should refrain from usurping ever more of the parental role (not that there is much left to take).
You yourself seem to be insulated from this phenomena, maybe you're unaware that it is occurring. Maybe it wouldn't change your opinions even if you were aware.
>If you want to actually protect children
What if I don't want to protect children (other than my own) at all? Why would you want to be these children's parents (you suggest you or at least others want to "protect" them), which strongly implies that you will act in your capacity as government, but then get all grumpy that other people are wanting to protect children by acting in their capacity of government?
The expectations on parents in USA are at their historical high. What are you talking on about in here. The expectation that parents will perfectly supervise them at every moment of their life till their adulthood is a.) new b.) at its historical max.
> holding parents accountable for what their children consume
There is a local dive bar down the street. I haven't expressly told my kids that entering and ordering an alcoholic drink is forbidden. In fact, that place has a hamburger stand out front on weekends and I wouldn't discourage my kids from trying it out if they were out exploring. I still expect that the bartender would check their ID before pulling a pint for them.
It takes a village to raise a child. There are no panopticons for sale the next isle over from car seats. We are doing our best with very limited tooling from the client to across the network (of which the tremendously incompetent schools make a mockery with an endless parade of new services and cross dependencies). It will take a whole of society effort to lower risks.
also there's a huge argument to be made that surveilling your kids is really really bad for their development
1 reply →
That same argument doesn't hold water on the internet. Its a communication medium. Its like a flow of information. You don't enter or leave physically spaces. the information flows to you where ever you are. trying to apply the same kinds of laws to the internet is a recipe for disaster because you are effecting everyone at the same time.
2 replies →
We live in a technofeudalist society now, we're all at the whims of the tech corps
age verification doesn't work in favor of a tech corp like facebook as they will see some users leave, some because they don't have the age required and some because they don't want to do the verification
> We'll try everything, it seems, other than holding parents accountable for what their children consume.
The way to keep kids from eating (yummy) lead-based paint chips was not holding parents accountable to what their kids ate, but banning lead-based paint.
This tired argument again. It doesn’t work. It’s like keeping your kid from buying alcohol but all their friends are allowed to buy it. The whole age demographic has to be locked out of the ecosystem.
Well, yes. If your friends can all go 'round to David's house, where David's parents hand each child a case of beer and send them on their way, any attempt by the other parents to prohibit underage drinking is going to be ineffective. But most parents don't do that. (I've actually never heard of it.) So social solutions involving parent consensus clearly do work here.
"But it's behavioural!" I hear you cry. "What's stopping children from going out, buying a cheap unlocked smartphone / visiting their public library / hacking the parental control system, and going on the internet anyway?" And that's an excellent objection! But, what's stopping children from playing in traffic?
9 replies →
The thing is, what are the parents to do beyond restricting things? You find out some creep has been talking to Junior; do you talk to your local police department, state agency, or to the feds?
We've never properly acted upon reports of predators grooming children by investigating them, charging them, holding trials, and handing down sentences on any sort of large scale. There's a patchwork of LEOs that have to handle things and they have to do it right. Once the packets are sent over state lines, we have to involve the feds, and that's another layer.
Previously, I would have said it's up to platforms like Discord to organize internal resources to make sure that the proper authorities received reports, because it felt like there were instances of people being reported and nothing happening on the platform's side. Now, given recent developments, I'm not sure we can count upon authorities to actually do the job.
Back in the day you would beat up that person.
> The thing is, what are the parents to do beyond restricting things?
Well, I can't speak for parents (as in all parents). I can, however, tell you what we did.
When two of my kids were young we gave them iPods. The idea was to load a few fun educational applications (I had written and published around 10 at the time). Very soon they asked for Clash of Clans to play for a couple of hours on Saturdays. We said that was OK provided they stuck to that rule.
Fast forward to maybe a couple of months later. After repeated warnings that they were not sticking to the plan and promises to do so, I found them playing CoC under the blankets at 11 PM, when they were supposed to be sleeping and had school the next day.
I did not react and gave no indication of having witnessed that.
A couple of days later I asked each of them to their room and asked them to place their top ten favorite toys on the floor.
I then produced a pair of huge garbage bags and we put the toys in them, one bag for each of the kids.
I also asked for their iPods.
No anger, no scolding, just a conversation at a normal tone.
I asked them to grab the bags and follow me.
We went outside, I opened the garbage bin and told them to throw away their toys. It got emotional very quickly. I also gave them the iPods and told them to toss them into the bin.
After the crying subsided I explained that trust is one of the most delicate things in the world and that this was a consequence of them attempting to deceive us by secretly playing CoC when they knew the rules. This was followed by daily talks around the dinner table to explain just how harmful and addictive this stuff could be, how it made them behave and how important it was to honor promises.
Another week later I asked them to come into the garage with me and showed them that I had rescued their favorite toys from the garbage bin. The iPods were gone forever. And now there was a new rule: They could earn one toy per month by bringing top grades from school, helping around the house, keeping their rooms clean and organized and, in general, being well behaved.
That was followed by ten months of absolutely perfect kids learning about earning something they cherished every month. Of course, the behavior and dedication to their school work persisted well beyond having earned their last toy. Lots of talks, going out to do things and positive feedback of course.
They never got the iPods back. They never got social media accounts. They did not get smart phones until much older.
To this day, now well into university, they thank me for having taken away their iPods.
So, again, I don't know about parents in the aggregate, but I don't think being a good parent is difficult.
You are not there to be an all-enabling friend, you are there to guide a new human through life and into adulthood. You are there to teach them everything and, as I still tell them all the time, aim for them to be better than you.
https://www.youtube.com/watch?v=99j0zLuNhi8
8 replies →
its like the food industry blaming parents, sugar like apps/games are designed to be addictive to the point they are act like a drug, stop the drug dealer, not the consumer.
Blaming parents is a bit unwarranted, when on the other end we have business interests driven by perverse incentives of predating on children’s gullibility for their own profit.
When you say “We‘ll try everything” that is simply not true, in particular what we do not try is strict consumer protection laws which prohibits targeting children. Europe used to have such laws in the 1980s and the 1990s, but by the mid-1990s authorities had all but stopped enforcing them.
We have tried consumer protection, and we know it works, but we are not trying it now. And I think there is exactly one reason for that, the tech lobby has an outsized influence on western legislators and regulators, and the tech industry does not want to be regulated.
It is literally the parents responsibility. You want to blame someone else. Raising a kid doesn't mean letting society raise them you have to make tough choices.
If parents can't handle that they can give them up to the state.
7 replies →
> We'll try everything, it seems, other than holding parents accountable for what their children consume.
You've missed the point. No legislator or politician cares about what the parents are doing.
What they care about is gaining greater control of people's data to then coerce them endlessly (with the assitance of technology) into acting as they would liike. To do that, they need all that info.
"The children" is the sugar on the pill of de-anonymised internet.
[dead]
Ah, the abstinence theory of protection. How it continues to rear its ugly head.
Why this utter drivel is the top comment is beyond me, unbelievable.
That is not what the post you are replying to is advocating for at all - try reading it one more time without so much hostility
Can you offer some rebuttal to give some credence to your point?
A physical realm that is safe for children to explore in their own is clearly preferable to one where it’s transgressive to let a child go outside without an escort.
It is plausible that the same applies to the digital realm.
Looking at actual data regarding Australia's landmark legislation setting a minimum age of 16 for social media access with enforcement starting on December 10, 2025 indicates weakened data protection. The Australian data suggests that while the legislation has successfully cleared the decks of millions of underage accounts (4.7 million account deactivations together with increased VPN usage and "ghost" accounts to bypass restrictions), it has simultaneously forced platforms to rely on third-party identity vendors, with the following failures so far:
1) Persona (Identity Vendor) Exposure (Feb 20, 2026): researchers discovered an exposed frontend belonging to Persona, an identity verification vendor used by platforms like Discord. This system was performing over 260 distinct checks, including facial recognition and "adverse media" screening, raising massive concerns about the scope creep of age verification.
2) Victorian Department of Education (Jan 2026): a breach impacting all 1,700 government schools exposed student names and encrypted passwords. This is a primary example of how child-related data remains a high-value target.
3) Prosura Data Breach (Jan 4, 2026): this financial services firm suffered a breach of 300,000 customer records.
4) University of Sydney (Dec 2025): a code library breach affected 27,000 people right as the new legislation was rolling out.
I was hoping for more on "... The only way to prove that you checked is to keep the data indefinitely." What do the laws say on this? What data is this? I would have assumed that just like a bouncer can check my ID and hand it back to me, a digital system can verify my identity and not hold onto everything (e.g. the actual photo of my ID).
If we're going to do this at all, it should be on the device, not the website/app. Parents flag their child's device or browser as under 18, and websites/apps follow suit. Parents get the control they're looking for, while service providers don't have to verify or store IDs. I guess it's just more difficult to pressure big dogs like google/apple/mozilla for this than pornhub and discord.
I’ve wondered if a age verification gig worker app could ever be viable: have people you can meet in person to prove your age without ever uploading any PII anywhere. Then issue a private key proving you are who you say you are.
Yeah vchip style with ratings, with a setting to hide unrated sites. A simple header. Done. Have all the browsers/os support it - easy peasy.
This sounds pretty reasonable to me. What am I missing?
It doesn’t come with a ton of PII you can sell to data brokers.
There are alternatives to ID verification if the goal is protecting children.
You could, for example, make it illegal to target children with targeted advertising campaigns and addictive content. Then throw the executives who authorized such programs in jail. Punish the people causing the harm.
If targeting children with advertising got corporate execs thrown in jail, wouldn't the companies just roll out age verification for users like they do now? How would this rule change their behavior? They have to know who the children are to not target them.
Stronger punishment creates more of an incentive to age verify. Which is basically why it's happening now.
> They have to know who the children are to not target them.
There is a difference between identifying specific children, and running programs that target children more generally; and / or having research that shows how your product harms children, and failing to do anything to stop it. We can tackle both of those issues without requiring age verification. We're headed down the path of age verification because we know now that not only is social media harmful, it's especially harmful to kids, and has been specifically targeted to them. Those are things that can be fixed, regardless of how you feel about age verification. Its not different than tobacco being not allowed to create advertisements for kids; its the same type of people doing the same types of things in the end.
At least then it wouldn't be the government requiring it, is what people may think I imagine.
1 reply →
How about instead of protecting the kids we protect everyone and do something to stop these gargantuan companies from driving engagement at any cost in order to rake ad revenue.
I’ve never met a single person who believed Facebook was a force for good. Why allow it to exist in its current form at all.
If these companies are the new town square then make a real online town square run by the government people can use if they so choose and then break up the monopolies that are destroying the fabric of all the societies on the globe.
We never in a million years should have allowed these companies to establish global scale communications platforms without the ability to properly moderate them with human intervention and that’s not even to speak of the actual nefarious intent they possess to drive engagement and the anti democratic techno feudalist sickening shit we see from their CEOs like Thiel et al.
They are a plague on our species that makes sport betting apps look like childs play.
To avoid your proposed punishment, they will implement things like ... ID verification.
Easy solution: Ban all targeted advertising, regardless of age.
1 reply →
Then ones who won't will become the preferred choice
Facebook advertises outright scams and nobody manages to punish them for that.
It isn't about the kids, they are just the smoke and mirrors.
>Then throw the executives who authorized such programs in jail.
Gee, I wonder if the executives who are suspected of doing such things haven't spent the last 100 years building the infrastructure necessary to avoid charges, let alone jail time? Large corporate legal departments, wink-wink-nudge-nudge command and control hierarchies where nothing incriminating is ever put into writing, voluminous intra-office communications that bury even the circumstantial evidence so deeply no jury could understand it even if the plaintiffs/state could uncover it, etc.
Anyone over the age of 12 that thinks corporate entities can be made to be accountable in a meaningful way is more than naive. They are cognitively defective. Or is it that you realize they can't be held accountable but you'd rather maintain the status quo than contemplate a country which abolished them and enforced that all business was the conducted by sole proprietorships and (small-n) partnerships?
For a while it was thought that we could never bring back anti-trust.
Sure, there's a lot of corruption right now. Doesn't have to stay that way.
1 reply →
The purpose of a system is what it does.
Undermining data protection and privacy is clearly the point. The fact that it's happening everywhere at the same time makes it look to me like a bunch of leaders got together and decided that online anonymity is a problem.
It's not like kids having access to adult content is a new problem after all. Every western government just decided that we should do something about it at roughly the same time after decades of indifference.
The "age verification" story is casus belli. This is about ID, political dissent, and fears of people being exposed to the wrong brand of propaganda.
You are missing the profit driven angle. Age verification companies and their lobbyists are pouring massive amounts of resources into lobbying for mandatory age verification. And the reason why can be pretty simple. They get richer of off violating people's privacy, especially when those privacy violations are legally required.
Exactly. So many comments here about technical solutions are missing the underlying government/authority problem, or are actively a part of it.
> The purpose of a system is what it does.
How far does it go? Are all bugs features? Shall we assume that Boeing (via MCAS) and Ford (via the Pinto) were trying to kill their passengers? There's a difference between ulterior motive and incompetent execution of expressed intention.
> The fact that it's happening everywhere at the same time makes it look to me like a bunch of leaders got together and decided that online anonymity is a problem.
"People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public." - Adam Smith
No, this is proven false by reducing this theory to the individual level. Anyone who has tried to design/imagine -> actually build something, be it an artist, architect, song writer, programmer, or otherwise, knows there is inevitably a gap between design and realization. No one involved in that process would at any point consider the gap to be part of its “purpose”.
People do hide their intentions but that doesn’t give us a license to reduce complex system dynamics to absurdities.
Bugs get fixed when systems are iterated on. They also tend to be single results from single mistakes, not compound end results of the implementation.
Design features tend to persist.
The phrase/idiom "the purpose of a system is what it does" maps best to situations where a multiple decisions within a system make little sense when viewed through the lens of the stated purpose, but make perfect sense if the actual outcome is the desired one.
It is an invitation to analyze a system while suspending the assumption of good faith on the part of the implementors.
1 reply →
There is missing a solution.
Give our personal devices have the ability to verify our age and identity securely and store on device like they do our fingerprint or face data.
Services that need access only verify it cryptographically. So my iPhone can confirm I’m over 21 for my DoorDash app in the same way it stores my biometric data.
The challenge here is the adoption of these encryption services and whether companies can rely on devices for that for compliance without having to cut off service for those without it set up.
The real problem with this is that the ultimate objective isn't age verification, it's complete de-anonymization. I think different groups want this for different reasons, but the simple reality is that minimizing the identify information transferred around is antithetical to their goals.
If we create age verification tools with strong privacy protections that solve the problems they raise, we can can call their bluff.
If we fight every and any solution, we may end up with their solution, becauase they build it. We end up in the position of saying "don't use the thing they built" without offering alternatives. I'd rather be saying "use whatbwe built, ita is better."
Is it though? Do you have any proof that is the case?
Google/Apple already know where you and your mistress live. In case you pay for any service, they've got your identity too. Ever had a single shipment confirmation to your address come to your mail? They know who you are.
The hardware providers already have the information. You only need to make them reveal it to 3rd parties.
1 reply →
I think this is what my German electronic ID card does. The card connects to an app on my phone via NFC, a service can cryptographically verify a claim about my age, and no additional info is leaked to the service provider or the government.
But that doesn't verify that the person using the ID is the person that it was issued to.
2 replies →
I think this is actually the correct way to move forward.
We should be able to verify facts about people on the internet without compromising personal data. Giving platforms the ability to select specific demographics will, in my view, make the web a better place. It doesn’t just let us age restrict certain platforms, but can also make them more authentic. I think it’s really important to be able to know some things to be true about users, simply to avoid foreign election interference via trolling, preventing scams and so much more.
With this, enforcement would also be increasingly easy: Platforms just have to prove that they’re using this method, e.g. via audit.
ISO/IEC 18013-5 (Personal identification — ISO-compliant driving licence — Part 5: Mobile driving licence (mDL) application) is a potential solution for this. https://www.iso.org/obp/ui/en/#iso:std:iso-iec:18013:-5:ed-1...
It would allow someone with an mDL on their device to present only their age instead of other identifying information.
Yes. I think it’s this + policy.
And that your iPhone and other devices become more restrictive if this is not implemented.
I presume most devices in the world do not have a solution to this (desktop windows computer for instance).
I’m not sure if it’s a good idea for like every porn website in the world to require a secret enclave to work. But this sounds better to me than storing users photo ids in an s3 bucket
This kind of solution couldn't come soon enough. It will also enable us to verify humanity and have bot-free spaces on social media again. I wrote about this here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
The solution has always been there: Assume everybody is an adult.
The only reasonable way to deal with children on the Internet is to treat Internet access like access to alcohol/drugs. There is no need for children to access the Internet full stop.
Internet is a network in which everything can connect to everything, and every connected machine can run clients, servers, p2p nodes and what not. Controlling every possible endpoint your child might connect to is not feasible. Shutting the entire network down because "won't somebody please think of the children" is not acceptable.
And, don't let them trick you. This is the endgoal. An unprecedented level of control over the flow of information.
So you would deny children the greatest source of knowledge in the history? I have learned math and programming thanks to unlimited access to the web and would not be where I am without it.
3 replies →
> And the only way to prove that you checked is to keep the data indefinitely.
This is a false premise already; the company can check the age (or have a third party like iDIN [0] do it), then set a marker "this person is 18+" and "we verified it using this method at this date". That should be enough.
[0] https://www.idin.nl/en/
Doesn't matter, I've already had to provably identify myself, the information is a) out there b) will be used and stored, and c) will be abused
and there is nothing I or the few (in terms of power) well-meaning government and corporate actors can do to change that.
And how do they prove to me they (and no 3rd party providers) aren't actually storing the data? I simply don't trust companies telling me they won't store something, so to me the only acceptable option is the data to never leave my device.
If third party does verification with ZKP, you only need to trust that third party. The company that requires verification will not have any data to store.
Nope, as the article notes, it is actually almost never enough because it does not stand up to legal scrutiny. And for good reason: there's no way to conclusively prove that the platform actually verified the user's age, as opposed to simply saying they did, before letting them in.
Most of this debate makes more sense if the actual goal is liability reduction, not child safety. If it were genuinely about protecting kids, you'd regulate infinite scroll and algorithmic engagement optimization, not who can log in.
If the US really cared about child safety they'd go after people in the epstien files.
As we can see from this user who has fallen ill with Epstein Brain, the real victims of social media algorithms are actually adults.
He is currently prepping to overthrow his local Pizzeria while the rest of us argue as if social media even exists anymore (it doesn't, it's just algorithmic TV now).
2 replies →
Most people have only a light grasp of what infinite scroll and algorithmic engagement optimization means. They know they like the scrolling apps more, but it takes a bit of research and education to really understand the specific mechanics and alternatives. We get this well as tech literate but many people using these apps today, are neither tech literate, nor even remember a world before infinite scrolling media was a thing. It seems incredibly obvious mechanism but I've explained it to people, and it takes a few times for it to really sink in and become a specific mental model for how they see the world.
I'm happy they don't because they don't know what they're doing. Hopefully countries prioritizing public health will implement a social media ban for the vulnerable population which gives them some time to grow up without all that garbage poisoning their brains. Then when they're 16 or whatever age, hopefully by that age we'll have realized that this is actually like cigarettes and everyone, all age groups treat it like that.
Better than muddying the waters trying to make it less addictive but then letting them on there when their brains aren't ready.
If the concern is time-wasting, even having upvotes or likes and sorting on them is plenty engaging. I spent thousands of hours as a teenager on Reddit, HN, and the old blue Facebook chronological feed.
I think it's because there's always a group of nosy busybodies finger-wagging about protecting the children and we have to do decorative theatrics to satiate whatever narratives they've convinced themselves of
This is a group particularly beloved by politicians, because you can pretty much use them as a smokescreen whenever you want to pass authoritarian legislation...
I bet that the Chat Control lobbyist groups are involved to some degree, as they tried to require age verification in Chat Control before it was shot down. They probably haven't given up after that defeat.
Pretty sure they’re doing both of those things but it takes a long time for the regulation to reach the final stage
Interestingly, regulating these would be good for adults as well. A lot of these very large online companies enjoy an asymmetric power advantage. We should aim to protect ourselves against them, in addition to our children.
I think this would be good for everybody, not just kids. It doesn't even have to be complicated: Just that after a certain amount of time scrolling/watching, put in a message asking if it's maybe time to stop with some information about how these algorithms try to keep you for as long as possible. Maybe a link to a government page with more information.
It doesn't have to be perfect and there will of course be easy workarounds to hid the warnings for people that want. The goal is to improve the situation though, not solve it perfectly. Like putting information about the dangers of smoking on packages of smokes; it doesn't stop people from smoking but it does make the danger very easy to learn.
Some platforms did display notices like this for a while, but I noticed that they stopped the practice after a few months.
Big tech likes this because there are a lot more face recognition technologies in the wild in real life and being able to connect all real life data to online data is quite valuable. It's also quite possibly the largest training set ever for face recognition if ids are stored and given how ids and images are sold across many companies it seems very high probability that some company will retain the data rather than delete after use.
China (and US via latin american countries and it's own poor people ...via benefit programs access via id.gov) is testing both biometrics and device id to evaluate pros and cons, and to merge data, when it come to autocratic control.
In china there are places to scan you device and get coupons. usually at elevators in residential buildings so they can track also if you're arriving or leaving easily.
In the US every store tracks and report to ad networks your Bluetooth ids. and we know what happens to ad networks.
US now requires cars to report data, which was optional before (e.g. onstar) and china joined on this since the ev boom.
the public id space is booming.
> US now requires cars to report data, which was optional before (e.g. onstar) and china joined on this since the ev boom.
This isn't true, there is no federal requirement for a cellular modem in cars. Most modern cars have one, but nothing prevents you from disabling or removing it. I certainly would not tolerate such a "bug" in by car.
> In the US every store tracks and report to ad networks your Bluetooth ids.
This also isn't true, modern phones randomize Bluetooth identifiers. I personally disable Bluetooth completely.
2 replies →
So don't use big tech. No one needs discord, or porn, or social media. But this is not the answer. The answer is fighting to change the laws. And we can start changing the laws by boycotting big tech. Laws are changed by money flows, not ideology.
Even if you design the perfect system, kids will just ask parents for an unlocked account, many parents will accept, myself included. My kids have full access to the internet and I never used parental control, I talk to them. Of course, I don't want to give parenting advice, that would be presumptuous. But, my point is that a motivated kid will find a way, you have to "work" on that motivation.
Many of the worst present on the internet is not age gated at all, you have millions of porn websites without even a "are you over 18" popup. There are plethora of toxic forums...
Of course it's a complex problem, but the current approach sacrifice a lot of what made the internet possible and I don't like it.
The solution is education. The most well adjusted kids I've seen are told flat out about the risks they'll face and, in general, helped to understand there are break points where things get too serious for them to try to deal with on their own.
I think that if you block all porn, social media, etc. all that does is create an opportunity for kids to be shifted to platforms controlled by bad actors. Adults fall victim to pig butchering schemes where they're given 100% fake investment apps that look completely real and they don't realize they're getting scammed until they try to get their money out of the system. There was a story in Canada about a guy and his daughter that thought they had $1 million in savings and it was a pig butchering scam with a fake app.
Are kids today equipped to deal with that? What happens when someone tells a kid to get app XYZ because it's un-moderated, but that app is controlled by a bad actor? Imagine a Snapchat like platform promising ephemeral messaging with simple username / password on-boarding so parents don't see account creation emails, but the app is run by organized crime.
I don't even know how you handle it if they manage to normalize the idea of children sending ID to random platforms. In addition to getting platform shifted and exploited, kids will be vulnerable to sending their real ID to bad actors.
The whole thing seems insane to me. Spend some money on education. That's the only long-term option.
> Many of the worst present on the internet is not age gated at all, you have millions of porn websites without even a "are you over 18" popup. There are plethora of toxic forums...
This is what I find most insane about the UK's age verification law. It's literally so easy to find adult content without proving your age... You can literally just type in "naked women" into a search engine and get porn...
To call it ineffective would be an understatement. Finding adult content on the web almost just as easy as it's always been. The only thing it's made harder is accessing adult content from the normie-web – you can't access porn on places like Reddit anymore, but you can access porn on 4chan and other dodgy adult sites.
If the argument is "think about the kids" there are more effective ways to do it... Requiring device-level filtering for example would likely be more effective because it could just blacklist domains with hosting adult content unless unrestricted. It would also put more power in the parents hands about what is and what isn't restrict.
>Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: many users who are legally old enough to use a platform do not have government ID.
So there is absolutely no way to change that and give out IDs from the age of 14? You can already get an ID for children in Germany https://www.germany.info/us-de/service/reisepass-und-persona...
This is a problem that has to be solved by the government and not by private tech companies.
This is a lazy cop out to say "we have tried nothing and we are all out of ideas"
I'm not convinced age restrictions like this are a good idea. But yeah, the non-availability of IDs in the US is a self-inflicted problem.
Another example where this plays a role are voter registration and ID requirements for voting in the US. It is entirely bizarre to me how these discussions just accept it as a law of nature that it's expensive and a lot of effort to get an ID. This is something that could be changed.
You may underestimate the levels of classism and racism in the US. Go on and bring up a conversation about it and you'll eventually get someone talking about how that would be socialism and we can't do that.
When one of the only two political parties does not want everyone to vote (cause they’d lose every election) you get what we got…
The problem is not that we aren’t doing age verification, it’s that a group of authoritarians are trying to force mandatory implementation of age verification (and concomitant removal of anonymity). That’s the problem.
It seems like the solution is to provide an age verification mechanism with robust privacy protections. That way when we offer a solution that works for all of their states concerns, if they disagree with the privacy preserving approach we force them to say outright "I want to keep a record of every website you visit."
5 replies →
Anonymity is a myth. I am sure by now an LLM can figure out who you are and where you live by your HN posts alone.
2 replies →
> This is a problem [..]
(This is a genuine question) please could you describe the underlying problem that age verification is attempting to solve?
Not my point in the comment but my personal opinion:
To regulate access to addicting material. This is done in the physical world - why should digital be lawless when it applies to the same human behaviors?
I've been addicted to a lot of digital media parts in harmful ways and I had the luck and support to grow out of most of it. A lot of people are not that lucky.
I don't think that's what the original comment was discussing at all...
If governments want to require private companies to verify ages, those same governments need to provide accessible ways for their citizens to get verification documents, starting from the same age that is required.
What problem? I don't think internet websites and apps actually need to know the face, age, or name of their users if their users don't want to provide that information. With exceptions for things like gambling websites.
Why should gambling be the exception? One could argue other app-based vices are just as bad, if not worse.
3 replies →
German immigrants can’t get an ID, sometimes for years
I strongly oppose any form of "age verification" involving uploading your ID. That's just asking for a data breach.
There are options that don't involve any ID uploads whatsoever.
That's not what this user was talking about.
For example, with a German ID you can provide proof that you are older than 18 without giving up any identifying information. I mean, nobody uses this system at the moment, but it does exist and it works.
1 reply →
It costs money. Getting an ID here costs about 5% of minimum wage if you order it online + travel (you still have to travel there for the photos and pickup). It costs even more if you apply in person.
You could buy 19 gallons of milk for that money (80 liters).
So do you buy an ID every month or can we depreciate that over 15 years?
3 replies →
Really more so than money is the amount of time. Sitting at the DMV for half a day, and that is with an appointment, really sucks.
> So there is absolutely no way to change that and give out IDs from the age of 14?
If that happened in the US, Republicans would then:
1. Insist that non-white children carry ID at all times
2. Operationalize DHS and ICE to deport non-white children to foreign concentration camps.
Exactly right. Also, better to be overly restrictive here given the well documented harms of social media on young minds. If the law stipulates that you must be 15 to obtain social media access, and most people don't get their IDs until 18, then most people will stay off social media for another three years: no big deal.
We've spent 30 years telling people "don't share personal info online" and now the compliance path is "upload your ID or face so you can browse memes"
Non of which is necessary to verify you crossed age threshold. Websites are just lazy, maybe on purpose. Accepting this kind of low effort age verification would be foolish.
They could just issue a x509 cert with an above 18 attribute and nothing else.
You're not 18 yet?
No Problem we just give you two certs with different valid from/to ranges that overlap and don't give away your birthday.
Problem solved.
Steelmanning the opposing position: one adult shares their certificate with every child on the planet.
Because it has no attributes (not even a unique serial number that could be used to track it) the whole scheme is now defeated.
Which, adult is merely someone who's just turned 18. What're the chances that any of them just post the thing up on 4chan? (I'm going to go with 100% chance of that happening.)
I worked for a decade in what I would consider the highest level of our kids' privacy ever designed, at PBS KIDS. This was coming off a startup that attempted to do the same for grownups, but failed because of dirty money.
Every security attempt becomes a facade or veil in time, unless it's nothing. Capture nothing, keep nothing, say nothing. Kids are smart AF and will outlearn you faster than you can think. Don't even try to capture PII ever. Watch the waves and follow their flow, make things for them to learn from but be extremely careful how you let the grownups in, and do it in pairs, never alone.
I would like to take the discussion in the other direction. How about we offer safe spaces instead of banning the unsafe spaces for kids.
Similar to how there is specific channels for children on the TV. Perhaps the government can even incentivize such channels. It would also make it easier for parents to monitor and set boundaries. Parents would only need to monitor if the tv is still tuned to disney channel or similar instead of some adult channels.
Similarly this kind of method could be applied to online spaces. Ofcourse there will be some kids that will find ways around it but they will most likely be outliers.
>How about we offer safe spaces instead of banning the unsafe spaces for kids.
Children shouldn't be associating with other children, except in small groups. Even the typical classroom count is far too large. They become the nastiest, most horrible versions of themselves when they congregate. A good 90% of the pathology of public schools can be blamed on the fact that, by definition, public schools require large numbers of children to congregate.
I have no idea where this idea that Internet is toxic to children is coming from. Is that some type of moral panic? Weren't most of you guys children/adolescents during the 2000's?
Are you saying that social media isn't harmful to children?
This is like rhetorically asking, "Are you saying that doom and marylin manson aren't harmful to children?"
The problem with social media isn't the inherent mixing of children and technology, as if web browsers and phones have some action-at-a-distance force that undermines society; it's the 20 years or so they spent weaponizing their products into an infinite Skinner box. Duck walk Zuckerburg.
This is all assuming good faith interest in "the children," which we cannot assume when what government will gain from this is a total, global surveillance state.
Last time I checked there's no scientific consensus if social media causes harm at all. The best studies found null or very small effects. So yeah, I am skeptical it is harmful.
Why is no one talking about using zero knowledge proofs for solving this? Instead of every platform verifying all its users itself (and storing PII on its own servers), a small number of providers could expose an API which provides proof of verification. I'm not sure if some kind of machine vision algorithm could be used in combination with zero-knowledge technology to prevent even that party from storing original documents, but I don't see why not. The companies implementing these measures really seem to be just phoning it in from a privacy perspective.
People are talking about it, at least here anyway.
The reason you don’t see it in policy discussion from the officials pushing these laws is because removal of anonymity is the point. It’s nit about protecting kids, it never was. It’s about surveillance and a chilling effect on speech.
You do see it in policy discussions from officials in the EU. You probably don't see it in policy discussions in the US because the groups that should be telling US officials how to do age verification without giving up anonymity are not doing so.
Zero knowledge proofs are only private and anonymous in theory, and require you to blindly trust a third party. In practice the implementation is not anonymous or private.
What third party do they require you to blindly trust?
Technologists engage in an understandable, but ultimately harmful behavior: when they don't want outcome X, they deny that the technology T(X) works. Consider key escrow, DRM, and durable watermarking alongside age verification. They've all been called cryptographically impossible, but they're not. It's just socially obligatory to pretend they can't be done. And what happens when you create an environment in which the best are under a social taboo against working on certain technologies? Do you think that these technologies stop existing?
LOL.
Of course these technologies keep existing, and you end up with the worst, most wretched people implementing them, and we're all worse off. Concretely, few people are working on ZKPs for age verification because the hive mind of "good people" who know what ZKPs are make working on age verification social anathema.
It's worse than that. Money will continue to flow in to companies like Persona, becauae there is no alternative. Persona will use that money to continue to build profiles on people.
Society would be better off paying cryptography researchers and engineers at NIST.
Parents are competing with multi-trillion dollar companies who have invested untold amounts of cash and resources into making their content addictive. When parents try to help their children, it's an uphill battle -- every platform that has kids on it also tends to have porn, or violence, or other things, as these platform generally have disappointingly ineffective moderation. Most parents turn to age verification because it's the only way they can think of to compete with the likes of Meta or ByteDance, but the issue is that these platforms shouldn't have this content to begin with. Platforms should be smaller -- the same site shouldn't be serving both pornography and my school district's announcement page and my friend's travel pictures. Large platforms are turning their unwillingness to moderate into legal and privacy issues, when in fact it should simply be a matter of "These platforms have adult content, and these ones don't". Then, parents can much more easily ban specific platforms and topics. Right now there's no levers to pull or adjust, and parent s have their hands tied. You can't take kids of Instagram or TikTok -- they will lose their friends. I hate the fact that the "keep up with my extended family" platform is the same as the "brainrot and addiction" one. The platforms need to be small enough that parents actually have choices on what to let in and what not to. Until either platforms are broken up via. antitrust or until the burden of moderation is on the company, we're going to keep getting privacy-infringing solutions.
If you support privacy, you should support antitrust, else we're going to be seeing these same bills again and again and again until parents can effectively protect their children.
Age checks sound simple, but they tend to turn into “please create a permanent ID for the internet.” I’d love a version that’s more like a one-time wristband than a loyalty card.
mandatory loyalty card (won't sell you bread if you don't present it) with additional database of your extra-shop activities
Age verification is a tough problem.
I ran into this when building a kids' education app a few years ago. We explored a bunch of options, from asking for the last four digits of their parents' SSN (which felt icky, even though it's just a partial number) to knowledge-based authentication (like security questions, but for parents).
Ultimately, we went with a COPPA-compliant verification service, but it added friction to the signup process.
It's a trade-off between security and user experience, and there's no perfect solution, unfortunately.
We are missing accessible cryptographic infrastructure for human identity verification.
For age verification specifically, the only information that services need proof of is that the users age is above a certain threshold. i.e. that the user is 14 years or older. But in order to make this determination, we see services asking for government ID (which many 14-year-olds do not have), or for invasive face scans. These methods provide far more data than necessary.
What the service needs to "prove" in this case is three things:
1. that the user meets the age predicate
2. that the identity used to meet the age predicate is validated by some authority
3. that the identity is not being reused across many accounts
All the technologies exist for this, we just haven't put them together usefully. Zero knowledge proofs, like Groth16 or STARKs allow for statements about data to be validated externally without revealing the data itself. These are difficult for engineers to use, let alone consumers. Big opportunity for someone to build an authority here.
>We are missing accessible cryptographic infrastructure for human identity verification.
like most proposed solutions, this just seems overcomplicated. we don't need "accessible cryptographic infrastructure for human identity". society has had age-restricted products forever. just piggy-back on that infrastructure.
1) government makes a database of valid "over 18" unique identifiers (UUIDs)
2) government provides tokens with a unique identifier on it to various stores that already sell age-restricted products (e.g. gas stations, liquor stores)
3) people buy a token from the store, only having to show their ID to the store clerk that they already show their ID to for smokes (no peter thiel required)
4) website accepts the token and queries the government database and sees "yep, over 18"
easy. all the laws are in place already. all the infrastructure is in place. no need for fancy zero-knowledge proofs or on-device whatevers.
What you’re describing is infrastructure that doesn’t necessarily exist right now for use online, and has all the privacy problems described. Why should I have to share more than required?
22 replies →
The government will want some way to uncover who bought the token. They'll probably require the store to record the ID and pretend like since it's a private entity doing it, that it isn't a 4A violation. Then as soon as the token is used for something illegal they'll follow the chain of custody of the token and find out who bought it.
No matter what the actual mechanism is, I guarantee they will insist on something like that.
4 replies →
A significant obstacle to adoption is that cryptographic research aims for a perfect system that overshadows simpler, less private approaches. For instance, it does not seem that one should really need unlinkability across sessions. If that's the case, a simple range proof for a commitment encoding the birth year is sufficient to prove eligibility for age, where the commitment is static and signed by a trusted third party to actually encode the correct year.
I agree. I've been researching a lot of this tech lately as a part of a C2PA / content authenticity project and it's clear that the math are outrunning practicality in a lot of cases.
As it is we're seeing companies capture IDs and face scans and it's incredibly invasive relative to the need - "prove your birth year is in range". Getting hung up on unlinkable sessions is missing the forest for the trees.
At this point I think the challenge has less to do with the crypto primitives and more to do with building infrastructure that hides 100% of the complexity of identity validation from users. My state already has a gov't ID that can be added to an apple wallet. Extending that to support proofs about identity without requiring users to unmask huge amounts of personal information would be valuable in its own right.
https://xkcd.com/538/
Your crypto nerd dream is vulnerable to the fact that someone under 18 can just ask someone over 18 to make an account for them. All age verification is broken in this way.
There is a similar problem for people using apps like Ubereats to work illegally by buying an account from someone else. However much verification you put in, you don't know who is pressing the buttons on the screen unless you make the process very invasive.
You seem to have missed requirement #3 -> tracking and identifying reuse.
An 18-year-old creating an account for a 12-year-old is a legal issue, not a service provider issue. How does a gas station keep a 21-year-old from buying beer for a bunch of high school students? Generally they don't, because that's the cops' job. But if they have knowledge that the 21-yo is buying booze for children, they deny custom to the 21-yo. This is simple.
1 reply →
Even if the problem is perfectly solved to anonymize the ID linked to the age, you still have the issue that you need an ID to exercise your first amendment right. 1A applies to all people, not just citizens, and it's considered racist in a large part of the US to force someone to possess an ID to prove you are a citizen (to vote) let alone a person (who is >= 18y/o) w/ 1A rights.
You are missing the point.
They don't care whether you are 14 or not. They want your biometrics and identification. "Think of the children" is just a pretense.
In general, any government already has your information, and it's naive to think that they don't; if you pay taxes, have ever had a passport, etc. they already have all identifying information that they could need. For services, or for the government knowing what you do (which services you visit), then a zero-knowledge proof would work in this case.
The companies don’t, and th government already has your government id.
Here is an example of the problem with inference-based verification:
https://streamable.com/3tgc14
Zero-knowledge proofs exist, that verify that a user's id holds certain properties, without leaking said ID.
> "Social media is going the way of alcohol, gambling, and other social sins: societies are deciding it’s no longer kids’ stuff."
Oh, remember those good old times when alcohol was kids' stuff.......
In Italy it is common for 13 and 14 year olds to have a glass of wine with dinner. The sin is not drinking, it is gluttony.
> Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, after building systems and normalizing behavior that protects them from the greater legal risks. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.
This rebuttal to privacy preserving approaches isn't compelling. Websites can split the difference and use privacy preserving techniques when available, and fall back to other methods when the user doesn't have an ID. I'd go further and say websites should be required to prioritize privacy preserving techniques where available.
There is a separate issue of improving access to government ID. I think that is important for reasons outside of age verification. Increasingly voting, banking, etc... already relies on having an ID.
They want to kill anonymity on the open web. This is why I am against that.
Does each service really need to collect this data from the user directly? They could instead have the user authorise them by e.g. OAuth2 to access their age with one of the de-facto online-identity-providers. I would be surprised if they didn't implement an API for this sometime soon, cause it would place them as the source of truth and give them unique access to that bit of user data. Seems like a chance and position they wouldn't want to lose.
I like the solution Tim Burners-Lee is working on. Lets hope he has some success.
https://solidproject.org/
https://www.theguardian.com/technology/2026/jan/29/internet-...
The purpose is to control the Internet. They've been trying this for ages. They tried with terrorism and other things. Now the excuse is protecting children.
Not exactly a good moment for this caste of politicians to pretend they care about children's well-being, though.
Social media, at least, are reaching the state the smoking had in the 80's when it started to be banned. After an era of praising (00's) scepticism followed (10's) until an underage ban comes (20's). Maybe in the 30's they will be banned in public spaces!
The problem of identifying a value for each person is very difficult. But government's role stops there. Until the teenager's screen more factors stay in the middle (parents, peers, criminals). I am curious how it turns out eventually. As a parent, I have already banned SM for my children, so not "affected" by the new policy.
Ban in public spaces might not make sense since there isn't a proximity hazard like smoke. But I can imagine after kids are banned from social media and we see how this was an obviously positive effect on their mental health, we will begin to acknowledge that this stuff is toxic for adults as well.
We could start to ban many of the mechanisms social media companies have deployed over the last 10 years. Infinite scrolling, algorithmic feeds of "creator" content, AI generated ragebait from bot accounts, etc. I'd love to see social media reverted back to when it was just holiday photos from your friends.
Age verification is one of the dumbest things humanity is wasting it's resources on.
From every political angle, the messaging seems to be "we want you to give birth to many kids, but we don't trust you raising them"
"Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law" - when you make rules contradictory, someone always violate these laws, and you can use selective persecution to "convince" companies to favor you, the incumbent politician. You don't even have to use such power, just a "joke" may be enough to send have any rational CEO licking your shoes.
European proponents of "anti-big-tech action" make it pretty explicit - broad discretionary power should be given to executive branch, because otherwise "international corporations" will use "loopholes" (and these "loopholes" are, in practice, explicitly written laws used as intended).
Someone explain me like I'm 5: there are some solutions already in effect that are based on cryptographically generated, anonymous, one-time use tokens that allow to confirm adults's age without being tied up yo your ID. Why on earth even technically skilled people completely ignore those? Is this pure NIMBY ignorance or am missing something?
Because those solutions always have obvious flaws. If the cryptographic token is anonymous, how do you know the user verifying is the same one who generated the token? How do you know the same cryptographic key isn't verifying several accounts belonging to other people?
They are one time use by definition. You can't know they are used by respectful owner, but the idea is you have to provide a new token every few weeks/months. Much like when using other services nowadays, I mean even Gmail will have you authorize every few months even if you didn't log out. Plus you fine/prosecute those who sell/misuse theirs. Just like you prosecute adults who buy kids alcohol or other substances.
Obvious flaws are OK. I absolutely hate the Nirvana fallacy that you people think is acceptable here, while hundreds of millions of kids suffer from serious developmental issues, as reported left and right by all kinds of organizations and governments themselves.
It's kind of weird to me how every article on this topic here has people rushing to comment within a couple minutes with some generic "yes I too support ID checks for internet use!". Has the vibe really shifted so much among tech-literate people?
Although there is some organic support, there is a lot of coordinated astroturfing. It’s apparent if you watch the discussions across platforms, there are obvious shared talking points that come in waves.
Governments (and a few companies) really want this.
What are some links to HN comments that you (or anyone else) feel is "coordinated astroturfing"?
The site guidelines ask users to send those to us at hn@ycombinator.com rather than post about it in the threads, but we always look into such cases when people send them.
It almost invariably turns out to simply be that the community is divided on a topic, and this is usually demonstrable even from the public data (such as comment histories). However, we're not welded to that position—if the data change, we can too.
3 replies →
> Governments (and a few companies) really want this.
The cynic in me fears they don't want a privacy-preserving solution, which blinds them to 'who'. Because that would satisfy parents worried about their kids and many privacy conscious folks.
Rather, they want a blank check to blackmail or imprison only their opponents.
9 replies →
> It’s apparent if you watch the discussions across platforms, there are obvious shared talking points that come in waves
This is true of basically any issue discussed on the internet. Saying it must be astroturfing is reductive
> It’s apparent if you watch the discussions across platforms, there are obvious shared talking points that come in waves.
How do you know what is "shared talking points" vs "humans learning arguments from others" and simply echoing those? Unless you work at one of the social media platforms, isn't it short of impossible to know what exactly you're looking at?
1 reply →
> there is a lot of coordinated astroturfing.
Interesting. Are you saying all the concerns raised by the proponents of ID verification are invalid and meritless? For example,
1. Foreign influence campaigns
2. Domestic influence campaigns
3. Filtering age-appropriate content
I’m sure there are many other points with various degree of validity.
7 replies →
> there are obvious shared talking points that come in waves.
Groups of people who wake up at the same time of the day often have a tendency to be from a similar place, hold similar values and consume similar media.
Just because a bunch of people came to the same conclusion and have had their opinions coalesce around some common ideas, doesn't mean it's astroturfing. There's a noticeable difference between the opinions of HN USA and HN EU as the timezones shift.
More than a few companies. Nothing would allow advertisers to justify raising ad rates quite like being able to point out that their users are real rather than bots.
> there is a lot of coordinated astroturfing. It’s apparent if you watch the discussions across platforms, there are obvious shared talking points that come in waves.
Is that really evidence of astroturfing? If we're in the middle of an ongoing political debate, it doesn't seem that far fetched for me that people reach similar conclusions. What you're hearing then isn't "astro-turfing" but one coalition, of potentially many.
I often hear people terrified that the government will have a say on what they view online, while being just fine with google doing the same. You can agree or disagree with my assesment, but the point is that hearing that point a bunch doesn't mean it's google astroturfing. It just means there's an ideology out there that thinks it's different (and more opressive seemingly) when governments do it. It means all those people have a similar opinion, probably from reading the same blogs.
7 replies →
"A few"?
"Real" user verification is a wet dream to googlr, meta, etc. Its both a ad inflation and a competive roadblock.
The benefits are real: teens are being preyed upon and socially maligned. State actors and businesses alike are responsible.
The technology is not there nor are governments coordinating appropiate digital concerns. Unsurprising because no one trusts gov, but then implicitly trust business?
Yeah, so obviously, its implementation that will just move around harms.
I think we should be careful of writing off this sea change as simple professional influence campaigns. That kind of thinking is just what got Trump to the Whitehouse, and is currently getting the immigrants rounded up.
Things that didn't seem likely to have broad support previously, now are seen as acceptable. In the 90's no one could envision rounding up immigrants. No one could envision uploading an ID card to use ICQ. No one could envision the concept of DE-naturalization or getting rid of birthright citizenship.
Today, in the US for instance, there are entire new generations of people alive. And many, many people who were alive in the 90's are gone. Well these new people very much can envision these things. And they seem to have stocked the Supreme Court to make all these kinds of things a reality.
All because the rest of us keep dismissing all of this as just harmless extreme positions that no one in society really supports. We have to start fighting things like this with more than, "It's not real."
11 replies →
The industry clearly prefers a system in which using the internet requires full identification. There are many powerful interests that support this model:
- Governments benefit from easier monitoring and enforcement.
- The advertising industry prefers verified identities for better targeting.
- Social media companies gain more reliable data and engagement.
- Online shopping companies can reduce fraud and increase tracking.
- Many SaaS companies would also welcome stronger identity verification.
In short, anonymity is not very profitable, and governments often favor identification because it increases oversight and control.
Of course, this leads to political debate. Some point out that voting often does not require ID, while accessing online services does. The usual argument is that voting is a constitutional right. However, one could argue that access to the internet has become a fundamental part of modern life as well. It may not be explicitly written into the Constitution, but in practice it functions as an essential right in today’s society.
You are missing the age/ID verification tech companies, who profit from violating privacy here. They have a strong incentive to try and convince/trick governments into legally requiring their services.
When anonymity is outlawed, then only outlaws will be anonymous.
There’s some nuance here.
Realizing that much of the internet is totally toxic to children now and should have a means of keeping them out is distinct from agreeing to upload ID to everything.
A better implementation would be to have a device/login level parental control setting that passed age restriction signals via browsers and App Stores. This is both a simpler design and privacy friendly.
There already is a means to keep children away from toxic content: parents (possibly with the help of locally run content blocking software).
I like this take. Ultimately the only people responsible for what kids consume are the parents. It’s on them to control their kids’ internet access, the government has no place in it. If you want to punish someone for a child being exposed to inappropriate content, punish the negligence of the parents.
I've thought the same.
At least here in US: Google/Apple device controls allow app to request whether user meets age requirements. Not the actual age, just that the age is within the acceptable range. If so, let through, if not, can't proceed through door.
I know I am oversimplifying.
But I like this approach vs. uploading an ID to TikTok. Lesser of many evils?
This also means the only operating systems allowing access to the internet will be these with the immense surveillance and ad-infested.
1 reply →
It doesn't sound simple. Now there needs to be some kind of pipeline that can route a new kind of information from the OS (perhaps from a physical device) to the process, through the network, to the remote process. Every part of the system needs to be updated in order to support this new functionality.
3 replies →
now?
I think it's quite embarrassing that the WWW exists since more than 3 decades and still there's no mechanism for privacy friendly approval for adults apart from sending over the whole ID. Of course this is a huge failure of governments but probably also of W3C which rather suggests the 100,000th JavaScript API. Especially in times of ubiquitous SSO, passkeys etc. The even bigger problem is that the average person needs accounts at dozens if not hundreds of services for "normal" Internet usage.
That being said, this is a 1 bit information, adult in current legislation yes/no.
> and still there's no mechanism for privacy friendly approval for adults apart from sending over the whole ID. Of course this is a huge failure of governments but probably also of W3C
I consider it a huge success of the Internet architects that we were able to create a protocol and online culture resilient for over 3 decades to this legacy meatspace nonsense.
> That being said, this is a 1 bit information, adult in current legislation yes/no.
If that's all it would take to satisfy legislatures forever, and the implementation was left up to the browser (`return 1`) I'd be all for it. Unfortunately the political interests here want way more than that.
SSO and passkeys don't solve adult verification. I don't see how this problem is embarrassing for the www - it's a hard problem in a socially permissible way (eg privacy) that can successfully span cultures and governments. If you feel otherwise, then solutions welcome!
There's a high chance the government is attempting to influence public opinion by using botted comments, which is easier than ever to pull off.
Seems like these articles and the subsequent top replies saying
"use a token from the device so the ID never leaves, this is way better right!"
This is the true objective. They actually want DEVICE based ID.
I want LESS things that are tied to me financially and legally to be stolen when(not if) these services and my device are compromised.
This is what the LLMs are actually good for.
3 replies →
If you dig into the CEO of the The Age Verification Providers Association (AVPA), he has spent years going out of his way to spam pro-age verification propaganda across random sites like Techdirt and Michael Geist's blog.
Its not unreasonable to assume that he would seek to automate his bullshit.
See here for some examples:
https://www.techdirt.com/2022/08/26/who-would-benefit-from-c...
https://www.michaelgeist.ca/2025/10/senate-bill-would-grant-...
Unless you live in North Korea, no there is not. This is pure conspiracy theory.
1 reply →
There are better and there are really really bad ways to do ID checks. In a world that is increasingly overwhelmed by bots I don't see how we can avoid proof-of-humanity and proof-of-adulthood checks in a lot of contexts.
So we should probably get ahead of this debate and push for good ways to do part-of-identity-checks. Because I don't see any good way to avoid them.
We could potentially do ID checks that only show exactly what the receiver needs to know and nothing else.
> We could potentially do ID checks that only show exactly what the receiver needs to know and nothing else.
A stronger statement: we know how to build zero-knowledge proofs over government-issued identification, cf. https://zkpassport.id/
The services that use these proofs then need to implement that only one device can be logged in with a given identity at a time, plus some basic rate limiting on logins, and the problem is solved.
4 replies →
When ID checks are rolled out there is immediate outrage. Discord announced ID checks for some features a couple weeks ago and it has been a non-stop topic here.
From what I’ve seen, most of the pro-ID commenters are coming from positions where they assume ID checks will only apply to other people, not them. They want services they don’t use like TikTok and Facebook to become strict, but they have their own definitions of social media that exclude platforms they use like Discord and Hacker News. When the ID checks arrive and impact them they’re outraged.
Regulation for thee, not for me.
This is what I was wondering too. It doesn't seem genuine. Most people in tech I know will strongly oppose ID checks for internet use, rightfully so.
I think that not doing partial-identity checks invite bot noise into conversations. We could have id checks that only check exactly what needs to be checked. Are you human? Are you an adult? And then nothing else is known.
1 reply →
We have a Scylla vs Charybdis situation, where lack of ID leads to an internet of bots, while on the other end we get a dystopia where everything anyone has ever said about any topic is available to a not-so-liberal government. Back in the day, it was very clear that the second problem was far worse than the first. I still think it is, but I sure see arguments for how improved tooling, and more value in manipulating sentiment, makes the first one quite a bit worse than it was in, say, 1998.
> Back in the day, it was very clear that the second problem was far worse than the first.
This is still the case. The difference now is that the astroturfed bot accounts are pushing for fascism (I.E., the second problem).
It's very odd. I see it everywhere I go.
I think a lot of the younger generation supports it, actually. They didn't really grow up with a culture of internet anonymity and some degree of privacy.
The younger generation is growing up where the internet is a giant dumpster fire of enshitification that a tanker full of gasoline just got poured on in the form of AI chatbots. With agents becoming even easier the equivalent of script kiddies are going to make it so much worse.
Privacy with respect to the government was one of the final pillars, but when everything placed on the internet is absorbed by the alphabets of government agencies, and the current admin does searches of it as their leisure they understand nothing is anonymous anymore.
It's funny that this is what the younger generation is going to think Millennials and older are completely stupid for still supporting. The current structure only benefits corporations and bots.
1 reply →
A lot of people are unhappy with the state of the Internet and the safety of people of all ages on it. I believe we should be focusing on building a way to authenticate as a human of a nation without providing any more information, and try to raise the bar for astroturfing to be identity theft.
There's absolutely some astroturfing happening, but I wouldn't discount that there is some organic support as well. Journalists have been pushing total de-anonymization of the internet for a while now, and there are plenty of people susceptible to listening to them.
It does feel like a shift, and sometimes coordinated signaling. This post was at the top of this thread not long ago. Now it's all pro-age verification posts.
It's inauthentic at best. The four horsemen of the infocaplypse are drugs, pedos, terrorists, and money laundering - they trot out the same old tired "protect the children!" arguments every year, and every year it's never, ever about protecting children, it's about increasing control of speech and stamping out politics, ideology, and culture they disapprove of. For a recent example, check out the UK's once thriving small forum culture, the innumerable hobby websites, collections of esoteric trivia, small sites that simply could not bear the onerous requirements imposed by the tinpot tyrants and bureaucrats and the OSA.
It's never fucking safety, or protecting children, or preventing fraud, or preventing terrorism, or preventing drugs or money laundering or gang activities. It's always, 100% of the time, inevitably, without exception, a tool used by petty bureaucrats and power hungry politicians to exert power and control over the citizens they are supposed to represent.
They might use it on a couple of token examples for propaganda purposes, but if you look throughout the world where laws like this are implemented, authoritarian countries and western "democracies" alike, these laws are used to control locals. It's almost refreshingly straightforward and honest when a country just does the authoritarian things, instead of doing all the weaselly mental gymnastics to justify their power grabs.
People who support this are ignorant or ideologically aligned with authoritarianism. There's no middle ground; anonymity and privacy are liberty and freedom. If you can't have the former you won't have the latter.
I think it's complicated by the turning tide on the health effects of social media.
So people are kind of primed for "makes sense to keep kids from these attention driven platforms"
But I think the average person isn't understanding the implications of the facial/id scanning.
Could be astroturfing
> Has the vibe really shifted so much among tech-literate people?
Actually, yes, it seems to have shifted quite a bit. As far as I can tell, it seems correlated with the amount of mis/disinformation on the web, and acceptance of more fringe views, that seems to make one group more vocal about wanting to ensure only "real people" share what they think on the internet, and a sub-section of that group wanting to enforce this "real name" policy too.
It in itself used to be fringe, but really been catching on in mainstream circles, where people tend to ask themselves "But I don't have anything to hide, and I already use my real name, why cannot everyone do so too?"
I don't support ID for internet use, only for adult content specifically. There's things on Discord that would shock you to your core if you saw some of it, I don't think children should be blindly exposed to any of it. Specifically porn. Tumblr almost got kicked out of the app store over porn, they went the route of banning it and killing what to me felt like a dying social media platform as things stood.
Do you think strip clubs and bars should stop IDing people at the door? I don't. Why should porn sites be any different?
The difference is that at the strip club, you show your ID to the bouncer, who makes sure its valid and that the photo matches your face, and then forgets all about it. Online, that data is stored forever.
The principle of online ID checks is completely sound; the implementation is not.
10 replies →
Until sex, sexuality, and sexual fantasies no longer have any potential stigma associated with them, any sort of verification to view mature content is unacceptable.
That sort information can permanently destroy peoples lives.
I think the word you're missing is fatigue.
The average tech literate person keep seeing their data breached over and over again. Not because THEY did anything wrong, but because these Corpos can't help themselves. No matter how well the tech literate person secures their privacy it has become clear that some Corpo will eventually release everything in an "accident" that causes their efforts to become meaningless.
After a while it's only human for fatigue to build up. You can't stop your information from getting out there. And once it's out there it's out there forever.
Meanwhile every Corpo out there in tech is deliberately creating ways to track you and extract your personal information. Taking steps to secure your information ironically just makes you stand out more and narrows the pool you're in to make it easier to find you and your information. And again you're always just one "bug" from having it all be for nothing.
I still take some steps to secure my privacy, I'm not out there shouting my social security information or real name. But that's habit. I no longer believe that privacy exists.
To the extent we ever had it in the past was simply the insurmountable restrictions on tracking and pooling the information into some kind of organization and easy lookup. Now that it is easier and easier to build profiles on mass numbers of people and to organize those and rank them the illusion is gone. Privacy is dead. Murdered.
And people are tired of pretending otherwise.
People are saying privacy is dead for decades, yet privacy continually declines more and more. And there's still quite a ways it can go from here. The defeatist attitude only helps further erosion.
1 reply →
> Has the vibe really shifted so much among tech-literate people?
HN has largely shifted away from tech literacy and towards business literacy in recent years.
Needless to say that an internet where every user's real identity is easily verifiable at all times is very beneficial for most businesses, so it's natural to see that stance here.
You talk about tech literacy, but then conflate age verification with knowing someone's identity. You should see the work people are doing to perform age verification in a way that preserves privacy, for example the EU and Denmark.
I mean there has always been some part of the tech literate people that were like that, they were just less likely to post about it on forums. Heck after the eternal September it wasn't uncommon for 'jokes' about requiring a license to use the internet.
Its weird how all these 1,000 IQ innovators suddenly can't figure it out.
I dont think they want to figure it out. They think the internet should be stagnant unchanging and eternal as it currently exists because it makes the most money. If you disagree you're either a normie, bot, or need to parent harder or something. There is nothing you can do don't dare try to change it.
The audience shifted from tech-literate to the opposite.
Too may in tech for the money that’s for sure. No fundamentals, just drilled leetcode.
I think so. A lot of people think the internet now is a somewhat negative construct and don’t feel so strongly about it somewhat dying away.
Beware the vocal minority. Internet comment sections only tell you the sentiments of people who make comments.
HN comments sentiment seems to shift over the age of the thread and time of day.
My suspicion is that the initial comments are from people in the immediate social circle of the poster. They share IRC or Slack or Discord or some other community which is likely to be engaged and have already formed strong opinions. Then if the story gains traction and reaches the front page a more diverse and thoughtful group weighs in. Finally the story moves to EU or US and gets a whole new fresh take.
I’m not surprised that people who support something are the ones most tuned in to the discussion because for anyone opposed they also have their own unrelated thing they care about. So the supporters will be first since they’re the originators.
The average tech “literate” person uses discord, social media, a GitHub with their real name, a verified LinkedIn, and Amazon Echo.
These are not the same people from 30 years ago. The new generation has come to love big brother. All it took to sell their soul was karma points.
Many of the things you mention are also tools that many people use in a professional context which mostly doesn't work if you try to be anonymous. Yes, some people choose to be pseudonymous but that mostly doesn't work if your real-life and virtual identities intersect, such as attending conferences or company policies that things you write for company publications be under your real name.
I highly doubt the sentiment is from real humans. If anything, it proves that a web-of-trust-based-attestation-of-humanity is the real protection the internet needs.
The vibe has shifted quite a bit among the general populace, not just in tech.
The short version is that voters want government to bring tech to heel.
From what I see, people are tired of tech, social media, and enshittified apps. AI hype, talk of the singularity, and fears about job loss have pushed things well past grim.
Recent social media bans indicate how far voter tolerance for control and regulation has shifted.
This is problematic because government is also looking for reasons to do so. Partly because big tech is simply dominant, and partly because governments are trending toward authoritarianism.
The solution would have been research that helped create targeted and effective policy. Unfortunately, tech (especially social media) is naturally hostile to research that may paint its work as unhealthy or harmful.
Tech firms are burned by exposés, user apathy, and a desire to keep getting paid.
The lack of open research and access to data blocks the creation of knowledge and empirical evidence, which are the cornerstones of nuanced, narrowly tailored policy.
The only things left on the table are blunt instruments, such as age verification.
This is a VC site, when the revenue generating model of the Internet has strongly shifted into surveillance capitalism overdrive.
Cui bono?
> Has the vibe really shifted so much among tech-literate people?
Yes.
Or more honestly, there was always an undercurrent of paternalistic thought and tech regulation from the Columbine Massacre days [0] to today.
Also for those of us who are younger (below 35) we grew up in an era where anonymized cyberbullying was normalized [1] and amongst whom support for regulating social media and the internet is stronger [2].
The reality is, younger Americans on both sides of the aisle now support a more expansive government, but for their party.
There is a second order impact of course, but most Americans (younger and older) don't realize that, and frankly, using the same kind of rhetoric from the Assange/Wikileaks, SOPA, and the GPG days just doesn't resonate and is out of touch.
Gen X Techno-libertarianism isn't counterculture anymore, it's the status quo. And the modern "tech-literate" uses GitHub, LinkedIn, Venmo, Discord, TikTok, Netflix, and other services that are already attached to their identity.
[0] - https://www.nytimes.com/1999/05/02/weekinreview/the-nation-a...
[1] - https://www.nytimes.com/2013/09/14/us/suicide-of-girl-after-...
[2] - https://www.washingtonpost.com/politics/2022/06/09/why-young...
[dead]
Better to let a hundred people’s “privacy” be violated than to let another child be radicalized or abused or misled by online predation.
> Better to let a hundred Black men be hanged than to let another White woman be radicalized or abused or misled by their predation.
This is how you sound to me.
> “privacy”
Why did you put "privacy" in scare quotes?
1 reply →
I don't subscribe to this. I value my privacy more.
It’s bots pushing another false narrative. You’ll notice this in anything around politics or intelligence the past 10+ years, with big booms around 2016 and 2024 “for some reason”
No. There are significant numbers of real people who genuinely support this type of thing. Dismissing it as "bots" or a "false narrative" leads to complacency that allows this stuff to pass unchallenged.
1 reply →
I used to be so against this but after the never ending cat and mouse game with my kids (especially my son) I don't think the tech crowd really appreciates how frustrating it is and how many different screens there are.
Tons of data also showing higher suicide rates, depression rates, eating disorders etc. so it's not as if there is no good side to this.
If they are so intent on disobeying what makes you they won't just use a VPN or ask someone older to login for them (or any other workaround, depending on the technology)?
I think the tech crowd appreciates how hard it is to lock down access to tech, since they were the kids bypassing the restrictions
You are the one handing them those screens.
> Tons of data also showing higher suicide rates
Here is the data:
https://www.cdc.gov/mmwr/volumes/66/wr/mm6630a6.htm
and the more recent data:
https://afsp.org/suicide-statistics/
I was a child of the 90's, where the numbers were higher, where we had peak PMRC.
> depression rates
Have these changed? Or have we changed the criteria for what qualifies as "depression"? We keep changing how we collect data, and then dont renormalized when that collection skews the stats. This is another case of it, honestly.
> eating disorders
Any sort of accurate data collection here is a recent development:
https://pmc.ncbi.nlm.nih.gov/articles/PMC7575017/
> never ending cat and mouse game with my kids (especially my son)
Having lived this with my own, I get it. Kids are gonna be kids, and they are going to break the rules and push limits. When I think back to the things I did as a kid at their age, they are candidly doing MUCH better than I, or my peer group was. Drug use, Drinking, ( https://usafacts.org/articles/is-teen-drug-and-alcohol-use-d... ) teen pregnancy are all down ( https://www.congress.gov/crs-product/R45184 )
As a tech-literate person, I'm not 100% against the concept of ID if only because I think people will be more reasonable if they weren't anonymous.
This conflicts with my concerns about government crackdowns and the importance of anonymity when discussing topics that cover people who have a monopoly on violence and a tendency to use it.
So it's not entirely a black/white discussion to me.
Both Google and Facebook have enforced real identity and its not improved the state of peoples comments at all. I don't think anonymity particular changes what many people are willing to say or how they say it, people are just the creature you see and anonymity simple protects them it doesn't change their behaviour all that much.
I think opt-in ID is great. Services like Discord can require ID because they are private services*. Furthermore, I think that in the future, a majority of people will stay on services with some form of verification, because the anonymous internet is noisy and scary.
The underlying internet should remain anonymous. People should remain able to communicate anonymously with consenting parties, send private DMs and create private group chats, and create their own service with their own form of identity verification.
* All big services are unlikely to require ID without laws, because any that does not will get refugees, or if all big services collaborate, a new service will get all refugees.
The problem is this is only true for values of "reasonable" that are "unlikely to be viewed in a negative light by my government, job, or family; either now or at any time in the future". The chilling effect is insane. There was a time in living memory when saying "women should be able to vote" was not a popular thing.
I mean, this is _literally the only thing needed_ for the Trump admin to tie real names to people criticizing $whatever. Does anyone want that? Replace "Trump" with "Biden", "AOC", "Newsom", etc. if they're the ones you disagree with.
4 replies →
> I think people will be more reasonable if they weren't anonymous.
I've seen people post appalling shit on fuckin LinkedIn under their own names.
Strong moderation keeps Internet spaces from devolving into cesspools. People themselves have no shame.
1 reply →
That's what I believe as well. Anons have turned the internet into an unsafe cesspit. It's the opposite of a "town square."
1 reply →
When you’re young, the overwhelming and irrepressible desire to overcome society's proscriptions to satisfy your intellectual and sexual curiosity is natural and understandable. The open Internet made that easier than ever, and I enjoyed that freedom when I was younger—though I can’t say it was totally harmless.
When you’re older and have children—especially preteens and teenagers—you want those barriers up, because you’ve seen just how fucked up some children can get after overexposure to unhealthy materials and people who want to exploit or harm them.
It’s a matter of perspective and experience. As adults age, their natural curiosity evolves into a desire to protect their children from harm.
The only thing this is going to achieve is to bar unverified users form the vaguely reputable and mainstream places into the small, completely unregulated spaces, sites and networks.
I presume you prefer hard requirement of IDs.
I'm saying this will make kids go to i2p, tor, to the obscure fora in countries not giving a f* about western laws.
As a parent to the teens and teens, THIS makes me concerned. The best vpns are very hard to detect (I know, I try it myself).
1 reply →
So you basically want to prevent your children from doing what you did at their age?
And you don't mind that freedoms of all of us would be restricted as a result?
And then, we keep blaming boomers for those restrictions.
22 replies →
> When you’re older and have children—especially preteens and teenagers—you want those barriers up, because you’ve seen just how fucked up some children can get after overexposure to unhealthy materials.
You mean that you shirk your responsibility to teach your child how to protect themself on the Internet, and instead trust the faceless corp to limit their access at the cost of everyone's privacy? How does this make sense...
4 replies →
Age verification is age discrimination.
Some can be 50 and still be clueless who to trust and what to do.
Every kind of discrimination merely shifts the burden.
And thinking of the children as an excuse for draconian law is itself child abuse. It's using children as a shield to take cover.
EDIT: I'd like to add that if age verification becomes a thing, we should also have an online drug test, insurance verification, financial wellbeing tests, mental health checks and a badge of dishonor for anyone who fails to comply.
Not really. If you take their rhetoric at face value ("Think of the children") then sure, it undermines everyone's data protection.
I will go as far as to assume that no one on HN believes this is done for the children. It's been done to censor people and ID the majority of normies online. And when you think of this, the undermining of everyone's data protection is note an undesired side effect, it's the goal.
Is this article AI-generated? https://www.pangram.com/history/f421130b-eefc-4f8c-b380-da0a... . I find it worrying that authors don't disclose the amount of AI assistance used in drafting and editing upfront. It sets a worrying precedent where everything is AI unless proven otherwise.
Age Verification is very hard to do without exposing personal information (ask me how I know). I feel it should be solved by a platform company - someone like Apple (assuming we trust apple with our personal information but seems like we already do) - and the platform (ios) should be able to simply provide a boolean response to "is this person over 18" without giving away all the personal information behind the age verification.
Now the issue of which properties can "ask to verify your age" and "apple now knows what you're looking at" is still an unsolved problem, but maybe that solution can be delivered by something like a one time offline token etc.
But again, this is a very hard problem to solve and I would personally like to not have companies verify age etc.
Isn't it a simpler solution to create some protocol for a browser or device announce an age restricted user is present and then have parents lock down devices as they see fit?
Aside from the privacy concerns, all this age verification tech seems incredibly complicated and expensive.
I think this solution exists (e.g. android parental lock, but also ISP routers). But parents and industry have failed to do so on a greater scale. So legislation is going for a more affirmative action that doesn't require parental consent or collaboration.
A service provider of adult content now cannot serve a child, regardless of the involvement or lack thereof of a parent.
I'm certain there is a way to verify age without compromise of privacy or identity. I'm sure it's possible to build some oAuth like flow that could allow sites to verify both human-ness and age. The systems and corporations that gate that MUST (in the RFC sense) be separate from the systems and corporations that want the verification.
Do we need laws to make this happen? What methods can be used to aid adoption? Do site operators really want to know the humanness and ages or are those just masks on adding more surveillance?
I really do think age verification should be at the device level, not per app or website. Parents can and should lock down their children's devices. I know device manufacturers don't want to take on that legal burden, but the tech is _right there!_
The thing that needs to be age banned, or really just banned, is algorithmic feeds with infinite scroll. Kids (and adults) need to just interact with their friends, and block all the bait.
For adults, I think a legal opt-in-only policy would work well. And require reconsent with every major algorithm change.
What's always got me about this is when I was in school I had it absolutely drilled into me that I should never expose personal information online to anyone, I completely saw the logic in that and so heavily limit the personal data I give out. Now we're just expecting people to completely go against that and give away the most personal details possible to companies who cannot prove what they are or are not doing with it just because governments have decided that's best now?
The people pushing for this resent schools because they instill a sense of dignity in the population.
They are bothered that you were taught such things and have made sure that your children will never be exposed to such information.
Starts off with a flawed assumption, playing into the hands of people who want surveillance.
"The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely."
If you start by legislating that you can't collect personal data or ID, then you are forced to do your age verification through other means. And legislate the government can't see what websites a user is visiting if you can to stop overreach. End result is a workable solution, zero knowledge proof or similar where government (the source of your ID documents) signs a token brokered by a proxy.
But when you start arguments from the position of 'no way to do this without violating privacy', the end result will be to violate privacy, because it seems an awful lot of people are demanding age verification and will sacrifice if they believe it is necessary.
The elephant in the room is 'unverified' users will overwhelmingly be underage kids, and that absence will be tracked across the internet. This whole thing inadvertently exposes who are the kids vs the adults programmatically.
Second, if all it takes to get into underage spaces is not being verified, predators *will* notice and exploit this hole.
Even the absence of information is information.
> The Roblox games site, which recently launched a new age-estimate system, is already suffering from users selling child-aged accounts to adult predators seeking entry to age-restricted areas, Wired reports.
I rest my case.
Well the default state is "assume underage". So the default state is be in same location as children. There's nothing for predators to exploit, they get access by default.
Which once people realize that, it all becomes really silly. The only way it would really work is by verifying that people are children, so only children can be in the gated location. But then you need to do mass surveillance on children and I think even the average person realizes this just makes that a great place for predators and the damage caused by a leak is far greater to children. Not to mention the impractical nature of it as children are less likely to able to verify themselves and honestly, you expect kids to jump through extra hoops?
Anyone that believes these systems will keep predators away from children haven't thought about even the most basic aspects of how these systems work. They cannot do what they promise
Surprisingly there is solutions that work just fine.
It's like bankid or myid works in Scandinavian countries.
When you need to identify yourself you are challenged by a 3rd party trusted service.
Making this a age verification should be very easy.
https://www.mitid.dk/en-gb/about-mitid/?language=en-gb
kids will just use their parents' credentials. it's not an edge case, it will actually just be a default behavior, I promise. platforms need to be safe by default, but that's the problem - safe by default doesn't play nice with engagement. platforms need engagement for profit. chicken and egg.
Well there are technical solutions for this: blind signatures.
I could generate my own key, have the government blind sign it upon verifying my identity, and then use my key to prove I'm an adult citizen, without anyone (even the signing government) know which key is mine.
Any veryfying entity just need to know the government public key and check it signed my key.
The ID check laws are about matching an identity to a user account.
If the identity check was blind it wouldn't actually be an identity check. It would be "this person has access to an adult identity".
If there is truly no logging or centralization, there is no limit on how many times a single ID could be used.
So all it takes is one of those adult blind signatures to be leaked online and all the kids use it to verify their accounts. It's a blind process, so there's no way to see if it's happening.
Even if there was a block list, you would get older siblings doing it for all of their younger siblings' friends because there is no consequence. Or kids stealing their parents' signature and using it for all of their friends.
I don't quite get your point. The signer is blind to what it signs, but that does not mean there is no identity per se.
A signed key is still unique.
- You can still check that user 1 and user 2 don't use the same key.
- You can still issue a challenge to the user every 10 days to make sure he has indeed access to his key and not just borrowed it.
- You can still enforce TPM use of said keys, so that they cannot be extracted or distributed online, but require a physical ID card.
- You can still do whatever revocation system you want for the cases when a key is stolen or lost.
Really the "blind" nature of the signature changes nothing to what you would normally do with a PKI.
1 reply →
I was thinking the same thing. Why don't we just get a key from the government?
> Why don't we just get a key from the government?
Because one could argue that the government could keep track of the keys they give away.
That is where blind signing is interesting. The government can sign _your_ key without knowing it.
The internet isn’t the same as it was when we were growing up, unfortunately. I miss the days of cruising DynamicHTML while playing on GameSpy but… yeah. It became an absolute clusterfuck and I’m not surprised they now want to enforce age restrictions.
Maybe TBL is right and we need a new internet? I don’t have the answer here, but this one is too commercialized and these companies are very hawkish.
The powers that be are 'correcting' the mistake of the anonymous internet where anyone can call a politician fat and the police can't arrest them for it.
I would argue that this has nothing to do with age verification, but everything to do with getting identifiable data on all of us.
In my experience the people who want "privacy preserving age verification" are the same people who want "encryption backdoors but only for the good guys." Shockingly the technically minded among them do seem to recognize the impossibility of the latter, without applying the same chain of thought to the former.
They are fundementally different problems. It is already the governments job to maintain a record of their citizens and basic demographic information like age.
Private actors are already offering verification as a paid service. They are accumulating vast troves of private date to offer the service.
'You must be 13 years old or older to create an account on this forum.'
Yeah, sure. Whatever you say, Jack.
How about we accept age verification but every parliamentary type that voted in favor goes to jail for just one year for each data breach?
Practically that means all of them will be imprisoned for life, of course.
This is my problem with the Discord situation too:
Big tech don't have wait for an outright government ban when they can just say that we are a teen-only site by default and everyone have to verify if they are over 18 or not. This age verification will affect everyone no matter what.
people really believe this coordinated push across jurisdictions is about kids and verifying their age? this excuse to try to end pseudonimity on the web is as old as the mainstream internet itself
to a lot of people it never sat well that people could just go online and say whatever they want, and communicate with each other unsupervised at large scale, and be effectively untargetable while doing so - that model of the internet was only allowed because it happened under the radar and those uncomfortable with it have been fighting it since they got the memo
the companies pushing hardest for age verification are the same ones whose business model depends on knowing exactly who you are. the child safety framing is convenient cover for a data collection problem they were already trying to solve.
I‘m not too knowledgeable about this, but couldn’t you just provide a government issued key to every citizen and give a service provider that key and it‘s only valid if you’re above a certain age?
I don't get the alcohol analogy as in most places it's 100% legal for minors to consume alcohol in the home with parental permission in the USA. In public it's a different story.
"Verifying age undermines everyone's data protection"
That's the whole point, right? A pretense to remove any remaining anonymity from communications?
Governments are endlessly infested with the worst people. They look back at historical attempts at totalitarianism and think to themselves, "Let's facilitate something like that, but worse".
good breakdown of the tradeoffs but it kinda stops at critique. explains why age verification becomes invasive but doesnt really propose what a workable alternative looks like. feels like we need concrete models or architectures not just pointing out the trap.
Defending the free internet nowadays means defending these sv criminals.
It's insanely dangerous to have so much data stored on so many servers that are inevitably not maintained.
A lot of talk and no solutions. Exactly the reason we are where we are.
Liquor stores, bars, strip clubs, adult bookstores, or similar businesses don't let kids in. Movie theatres don't let a 10 year old in to an R-rated movie. The tech industry ignored their social responsibility to keep kids away from adult and age-inappropriate content. Now, they are facing legal requirements to do so. Tough for them, but they could have been more proactive.
If there's only a centralized system that uses digital IDs to hand off providers only a "yay" or "nay"...
Everything is a trade off in the world. I think that people who are anti-id ignore this but for me personally it’s harder and harder to accept the trade offs of an internet without id. AI has only accelerated this, I don’t want to live in a world where the average person unknowingly interacts with bots more than other individuals and where black market actors can sway public opinion with armies of bots.
I think most people are aligned here, and that an internet without identification is inevitable whether we like it or not.
Astroturfing was already a thing.
Identification fixes nothing here, you log with your account, plug in the AI.
The problems with social media have nothing to do with ID and everything to do with godawful incentives, the argument seems to be that it's a large price to pay but that it's worth it. Worth it for what? The end result is absolutely terrible either way
Astroturfing will still be a thing after ID. What, you think the government is going to go after their own bot armies?
7 replies →
1) ID checks will not close the trade off. Real IDs are easily available on the market. Thus criminals will used them no problem. It's the law-abiding privacy-minded people (like me) who would be hurt the most.
2) Your point is valid. I too want to know whether I am engaging with a bot or a person. This is impossible now and it will be impossible once ID check becomes ubiquitous.
3) I will be happy to see (or not) a blue checkmark by the profile name. Just like in Twitter. That's enough.
100% correct. At this point the harms to children from social media use are very well documented.
Like everything else in society, there are tradeoffs here, I'm much more concerned with the damage done to children's developing brains than I am to violations of data privacy, so I'm okay with age verification, however draconian it may be.
> At this point the harms to children from social media use are very well documented
Our middle child (aged 12) has an Android phone, but it has Family Link on it.
Nominally he gets 60 mins of phone time per day, but he rarely even comes close to that, according to Family Link he used it for a total of 17 minutes yesterday. One comes to the conclusion that with no social media apps, the phone just isn't that attractive.
He seems to spend most of his spare time reading or playing sports...
3 replies →
We need to destroy privacy and anonymity online for the noble goal of the government banning teenagers from looking at Twitter and Instagram?
If it's a concern, parents can prevent or limit their children's use. If all this were being done to prevent consistent successful terrorist attacks in the US with tens of thousands of annual casualties, I'd say okay maybe there is an unavoidable trade-off that must be made here, but this is so absurd.
2 replies →
Do you genuinely believe the major tech companies and gov reps actually want to close their addiction revenue taps?
No the argument is bad actors will reliably find a way to bypass these systems at an industrial scale while you'll instead snag honest people instead.
Look at the facebook real name policy.
This sounds a lot like the pro-gun rhetoric of bogging down the "good" gun owners but not doing enough to the "bad" gun owners.
2 replies →
> I think that people who are anti-id ignore this
No we do not.
>I don’t want to live in a world where the average person unknowingly interacts with bots more than other individuals and where black market actors can sway public opinion with armies of bots.
That is not the argument for identification on many places on the internet. It's not even the argument that the gov reps pushing it typically make. And why would it be. The companies that go along with all this don't want to get rid of all bots and public opinion campaigns. They make money off of many of those.
You're not thinking more than one step ahead. If you let a third party define who "has ID", "is human", etc. you give that third party control over you. You already gave control of your attention away to the sites who host the UGC, now you also give away control of your sense of reality.
At any point they can tell a real human what they can and can't say, and if they go against their masters, their "real human" status is revoked, because you trust the platform and not the person.
If we want to go full conspiritard, we could accuse those of wanting to control speech to be the financial backers of those flooding social media with AI slop: https://www.youtube.com/watch?v=-gGLvg0n-uY -- this fictional video thematically marries Metal Gear Solid 2's plot with current events: "perfect AI speech, audio and video synthesis will drown out reality [...] That is when we will present our solution: mandatory digital identity verification for all humans at all times"
I am though. In the world I live in I already have to give power over myself to corporations and a government, I don’t buy this as an argument for continuing to let internet companies skirt existing laws.
1 reply →
(Disclaimer: American perspective)
Why don't we have PKI built in to our birth certificates and drivers licenses? Why hasn't a group of engineers and experts formed a consortium to try and solve this problem in the least draconian and most privacy friendly way possible?
newer passports and driver licenses do.
ID verification doesn't protect against that. Why not? Because there are a lot of people that will trade their ID for a small amount of money, or log someone/something in. IDs are for sale, like everyone who was ever a high school student knows for "some" reason.
Plus what you're asking would require international id verification for everyone, which would first mostly make those IDs a lot cheaper. But there's a second negative effect. The organizations issuing those IDs, governments, are the ones making the bot armies. Just try to discuss anything about Russia, or how bad some specific decision of the Chinese CCP is. Or, if you're so inclined: think about how having this in the US would mean Trump would be authorizing bot armies.
This exists within China, by the way, and I guarantee you: it did not result in honest online discussion about goods, services or politics. Anonymity is required.
I am so surprised by the comments on this thread. I was not expecting to see so many people on Hacker News in favor of this. As is typically the case with things like this, the reasoning stems from agreeing with the goal of age verification, with little regard to whether age verification could ever actually work. It reminds me in some sense to the situation with encryption where politicians want encryption that blocks "the bad guys" while still allowing "the good guys" to sneak in if necessary. Sure, that sounds cool, it's not possible though. I suppose DRM is a better analogue here, an increasingly convoluted system that slowly takes over your entire machine just so it can pretend that you can't view video while you're viewing it.
To be clear, tackling the issue of child access to the internet is a valuable goal. Unfortunately, "well what if there was a magic amulet that held the truth of the user's age and we could talk to it" is not a worthwhile path to explore. Just off the top of my head:
1. In an age of data leaks, identity theft, and phishing, we are training users to constantly present their ID, and critically for things as low stakes as facebook. It would be one thing if we were training people to show their ID JUST for filing taxes online or something (still not great, but at least conveys the sensitivity of the information they are releasing), but no, we are saying that the "correct future" is handing this information out for Farmville (and we can expect its requirement to expand over time of course). It doesn't matter if it happens at the OS level or the web page level -- they are identical as far as phishing is concerned. You spoof the UI that the OS would bring up to scan your face or ID or whatever, and everyone is trained to just grant the information, just like we're all used to just hitting "OK" and don't bother reading dialogs anymore.
2. This is a mess for the ~1 billion people on earth that don't have a government ID. This is a huge setback to populations we should be trying to get online. Now all of a sudden your usage of the internet is dependent on your country having an advanced enough system of government ID? Seems like a great way for tech companies to gain leverage over smaller third world companies by controlling their access to the internet to implementing support for their government documents. Also seems like a great way to lock open source out of serious operating system development if it now requires relationships with all the countries in the world. If you think this is "just" a problem of getting IDs into everyone's hands, remember that it a common practice to take foreign worker's passports and IDs away from them in order to hold them effectively hostage. The internet was previously a powerful outlet for working around this, and would now instead assist this practice.
3. Short of implementing HDCP-style hardware attestation (which more or less locks in the current players indefinitely), this will be trivially circumvented by the parties you're attempting to help, much like DRM was.
Again, the issues that these systems are attempting to address are valid, I am not saying otherwise. These issues are also hard. The temptation to just have an oracle gate-checker is tempting, I know. But we've seen time and again that this just (at best) creates a lot of work and doesn't actually solve the problem. Look no further than cookie banners -- nothing has changed from a data collection perspective, it's just created a "cookie banner expert" industry and possibly made users more indifferent to data collection as a knee-jerk reaction to the UX decay banners have created on the internet as a whole. Let's not 10 years from now laugh about how any sufficiently motivated teenager can scan their parent's phone while they're asleep, or pay some deadbeat 18 year-old to use their ID, and bypass any verification system, while simulateneously furthering the stranglehold large corporations have over the internet.
Whatever happened to all the innovation the tech world was capable of? This is 100% a solvable problem. It only needs the will and good law.
1) Person signs up with discord with fake name and fake email.
2) Discord asks (state system) for an age validation.
3) In pop up window, state validates the persons age with ID matching with face recognition.
3) State system sends token to discord with yes or no with zero data retention in the state records.
4) Discord takes action on the account.
What is so hard about this?
Innovation doesn't need a big brother looking over your shoulder all the time. We have enough tracking tech on the web, no need to make it official as well. So yes, the "will" is missing, especially with the many misbehaviour states have shown towards privacy and surveillance.
A little legislative change and you can kiss your zero-proof goodbye if any infrastructure is established. This is about making intelligent decisions in your life. Your suggestion is far from innovative.
We will see real innovation in mechanisms to sideline age verification.
I want to sincerely ask whether you read my post, because your response is so unrelated I believe you might accidentally be responding to another post? If so, please ignore the rest, which is only intended in the case where you are actually responding to what I wrote.
Your system seems to address none of the issues I listed. For example, I argue that one difficulty is in the fact that these systems would be highly phishable -- a property that is present in your described "easy" solution. Your system trains users to become accustomed to being pestered by pop up windows that ask to see their ID and use their camera. Congrats, I can now trivially make a pop up a window that looks like this UI and use it to steal your info, as the user will just respond on auto-drive, as we have repeatedly shown both in user studies and in our own lived experiences. I also explained how a system like this would assist in the practice of trapping migrant workers by confiscating their government credentials [1]. This is a huge problem today in Asia, and one of the few outlets captive workers can use to escape this control is the internet -- a "loophole" your system would dutifully close for these corporations.
I am happy to have a discussion about this -- it's how we come up with new solutions! But that requires reading and responding to the concerns I brought up, not assuming that my issue is that I can't imagine implementing a glorified OAuth login flow.
1. There's tons of articles about this, here is one of the first ones that comes up on Google: https://www.amnesty.org/en/latest/news/2025/05/saudi-arabia-...
2 replies →
Every age verification scheme is really an identity verification scheme, "age" is just the acceptable entry point. Once the infrastructure exists to verify you are 18, it can verify you are not on a watchlist, verify your creditworthiness, verify your political associations.
You are not building a parental filter. You are building rails.
"Protect the children" is the canonical playbook for every surveillance expansion since forever. The children get protected for six months. The infrastructure stays forever.
Palantir.
Isn't hosting stuff on tor/i2p the solution here? Putting stuff technically out of bounds of regulation had always seemed like the ideal end goal
I wonder how much time we have before being asked to enter the government issued ID in a card reader so websites can read age and biometric data from the chip.
Hence why Illinois has already mame it illegal.
Isn't clear whether the paradox is biometric verification or ID data collection.
It’s the same as scanning for CSAM, or encryption-backdoors “to catch criminals”.
Of course we hate child abuse.
Of course we hate criminals.
Of course we hate social media addicting our kids.
But they’re just used as emotional framing for the true underlying desire: government surveillance.
(For the record: I am not into conspiracy theories; the EU has seen proposals for - imho technically impossible - “legally-breakable encryption” alone in 2020, 2022, and 2025; now we”ll also see repeated attempts at the “age verification” thing to force all adults to upload their IDs to ‘secure’ web portals)
It's amazing how much it's possible to foment arguments against something if you are very well funded and a regulation will cost your industry a lot of money.
Age verification is a good thing. Giving children unrestricted access to hardcore pornography is bad for them. Whatever arguments you want to make, fundamentally this is true.
Age verification is fundamentally harmful and is an attack on user privacy. Age verification is being heavily lobbied for by tech companies that are hoping to get rich off of violating your privacy.
Anonymous age verification is fundamentally impossible. It is especially a bad idea for adult content, as a person's perfectly legal sexual beliefs and fantasies can permanently destroy their lives if that information got out. Parental controls are the only ethical, secure, and privacy protecting way forward here.
parents: won't somebody else put some rules and safeguards in place to protect my children?
Most of them probably don't even have kids of their own, they just hate social media for exposing children to conservative ideas and want to ban it to prevent that.
More like fear of exposing their children to anything other than carefully curated "conservative" ideas.
I think this should work like OpenID connect but with just a true/false.
PS = pr0n site
AV = age verification site (conforming to age-1 spec and certified)
No other details need to be disclosed to PS.
I don't know if this is already the flow, but I suspect AV is sending name, address, etc... All stuff that isn't needed if AV is a certified vendor.
That solution still violates user privacy.
A better solution would be a simple "minor" flag that is only included on the devices of minors. No third party verification required for adults.
That works until minor-removing-flag proxys start popping up.
All of my kids devices are identified, at device level, as children's devices. They could've trivially exposed this as metadata to allow sites to enforce "no under 18" use. However, I'd disagree that my bigger concern for my kids isn't that they'd see a boob or a penis, but that they'd see an influencer who'd try to radicalize them to some extremist cause, and that's usually not considered 18+ content.
And either way, none of that requires de-anonymizing literally everyone on the internet. I'd be more than happy to see governments provide cryptographically secure digital ID and so that sites can self-select to start requiring this digital ID to make moderation easier.
The problem with that is that sites/apps will retain the identifier, either to use the Digital ID for login (not just one-time age verification), because they want to retain as much information as possible for later usage or sale, or because a government told them they have to retain it so all their social media activity can be easily linked to them.
I have been a kid and now a parent. It is impossible with the tools available to proof kids from the internet. If it's not a parent it's at a friends house, school devices, or a dedicated sense of curiosity.
Two things tech companies want to protect:
The perception of anonymity
Who gets to collect that information
I agree that a smaller loop of people should have that data but the loop is growing every day.
So if it ruins the perception anonymity for young naieve users so be it.
I'm not saying it's impossible to be somewhat anon, I'm just saying untrained users should understand the environment they're interacting with before they get hooked on useless products.
Isn't this the same debate as airports post 9/11, whether you can have both privacy and security? Seems conclusive, no.
My main takeaway from this is that politicians seem to have given up on making "social media" less harmful by regulating it, and instead focus on gatekeeping access, with the added perk of supplying security services and ad tyrants with yet another data pump.
This thread is gonna be full of HN users blaming the parents for a systemic problem isn’t it?
Yup.
"The only way to prove that someone is old enough to use a site is to collect personal data about who they are."
This is not true, as others have pointed out. Kind of sad to see no mention of privacy-preserving technology already in use in an IEEE article.
Imagine an OIDC type solution but for parents might work here.
Basically, kids can sign up for an account triggering a notification to parents. The parent either approves or rejects the sign in. Parents can revoke on demand. See kids login usage to various apps/services. Gets parental restrictions in the login flow without making it a PITA.
It's just another way to surveil the population and won't cause any real problems for anyone who can work around it.
(thats the point)
All adults proof their identify multiple times per month: Every time they access digital health records, or when they use any electronic payment.
Just make Google/Apple reveal part of that data (age > x years) to websites and apps.
Boom, done. Privacy guarded. Easy.
>Every time they access digital health records, or when they use any electronic payment.
This is the internet.
Among those who were very familiar with it, the smartest money never started doing things like that.
Imagine if regular TV with age warnings in the beginning of programs made you fax your ID to the channel headquarters before you could watch PG13+ or whatever movie. This is obviously a privacy nightmare and a suboptimal solution.
A good solution that respects privacy and helps reduce the exposure to harmful content at a young age is not very obvious though (but common sense and parental guidance seems to be the first step)
zero knowledge cryptography solves this
Only in theory, and only if you blindly trust a third party. The implementations in practice are still massive privacy violations.
what third party do you need? The program can be opensource. The verification can be done onchain
Is there any production ready implementation out there?
ZKP is integrated in Google Wallet and has been running in production for a few months. We (Google) released the ZKP library as open source last year (this is the library used in production).
Announcement: https://blog.google/innovation-and-ai/technology/safety-secu...
Library: https://github.com/google/longfellow-zk
Paper: https://eprint.iacr.org/2024/2010
Afterwards, folks from ISRG produced an independent implementation https://github.com/abetterinternet/zk-cred-longfellow with our blessing and occasional help. I don't know if the authors would call it "production ready" yet, but it is at least code complete, fast enough, and interoperable with ours.
1 reply →
That’s the point: enable mass surveillance and thee loss of privacy under the guise of another cause, usually “protecting the children”.
I feel like the ending undermines the whole piece. Throwing your hands up and going "we should do nothing" isn't really a solution. If a compromise exists I think it's adding age requests on device setup. There wouldn't be any verification but it could be used as a way to limit access to content globally. Content provides would just need a simple API to check if the age range fits and move right along.
This puts more onus on parents and guardians to ensure their child's devices are set up correctly. The system wouldn't be perfect and people using something like Gentoo would be able to work around it, but I think it helps address the concerns. A framework would need to be created for content providers to enforce their own rating system but I don't think it's an impossible task. It obviously wouldn't cover someone not rating content operating out of Romania, but should be part of the accepted risk on an open internet.
Personally I do agree with the "do nothing" stance, but I don't think it's going to hold up among the wider public. The die is cast and far too many average people are supporting moves like this. So the first defense should be to steer that conversation in a better way instead of stonewalling.
> Personally I do agree with the "do nothing" stance, but I don't think it's going to hold up among the wider public. The die is cast and far too many average people are supporting moves like this. So the first defense should be to steer that conversation in a better way instead of stonewalling.
I agree with this, and I find it frustrating how many people refuse to see this. It seems a lot of people would rather be "right" than compromise and keep the world closer to their stated values.
I don't see why platforms would have to store the data indefinitely.
Once you are verified, you just flip a bit "verified" in the database and delete all identification data.
No reason to store the data indefinitely
Big Tech refused to work together to implement a age flag that parents would setup on the children device, now we get each European and each USA state with their own special rules.
That was the goal.
I can understand the need to restrict some stuff kids can see, like when I was a teen it me hours and hours to download one 2 minute porn clip from kazaa, but these days you could download a lifetime worth in one weekend. That can't be healthy.
That being said nothing about these laws is about protecting children; their primary purpose is to crack down on the next Just Stop Oil or Palestine Action so for that reason should be opposed.
> their primary purpose is to crack down on the next Just Stop Oil or Palestine Action so for that reason should be opposed.
Could you explain to me how a digital id standard involving a mechanism for zero knowledge proof of identity is supposed to help the government crack down on activists (skipping the fact you are using UK examples for a EU spec)?
Take into account the reality we live in where every communication platform already requires you to provide your phone number and you can't get a phone number without providing an ID the European Union please, not some disconnected threat model.
> ...but these days you could download a lifetime worth in one weekend.
Uh. I could check in the back of my parents' closet (hidden under some fabric) for at least a decade's worth of dirty magazines. It's true that that's less than a lifetime's worth of pictures and articles, but I'd say that that's effectively equivalent.
> That can't be healthy.
The only thing that's unhealthy is not being able to talk frankly and honestly about sex and sexuality with your peers, parents, and other important adults in your life. Well, that and never being told that sex leads to pregnancy, or how to recognize common STDs... but you're likely to get that "for free" if you're able to talk frankly and honestly about sex and sexuality.
It's to continue the culture of bullying and lack-of-accountability by and for the perversely rich oligarchy.
For you'll need to be accounted while they do the counting.
If government is concerned shouldn't government just deliver auth based on birth certificate for everyone to use?
In most countries is illegal for small children to drive or to use fire arms. And it's their parents job to not let them to.
Instead of requiring IDs, we should let parents manage what their children do online.
"Think of the children" is merely a political argument to get a law to be popular among normal people.
I'm happy to see the IEEE talking about this, and to see the topic getting attention in the technical press. The problem is, I think (for the most part) that technical people already Get It. Who we need to convince are average, non-technical voters who don't know, don't understand, think it's a good thing, or would happily jump feet-first into a wood chipper if someone told them it would protect the children.
I also suspect that social media has damaging effects on kids, and they probably shouldn't have access to it, but not like this. I'd probably be quicker to support something like saying that individuals <18 aren't allowed to buy or possess a phone or tablet that has access to an app store or web browser, and only offers voice- and text-based communications channels. Ok, so now it all happens on a laptop? What's "a tablet?" Is a Chromebook a tablet? It's fucking impossible.
this is a broader parenting problem, the state doesn't need to do this
politicians are interested in it because they're begging for some way to censor the internet, which would actually be even worse for parenting because now it prevents children from ever learning to be responsible with these highly addictive platforms
30 years of internet were possible with relative freedom, without spying and surveillance. All of the sudden it's not possible.
Governments recycle "Think of the children" mantra and they are again after terrorists and bad guys.
Mandatory age check is not going to reduce the number of criminals online. Period.
We should focus on teaching parents how to educate their children properly, and teach children how to safely browse the internet and how to avoid common scams and pitfalls.
I played Roblox when I was a teenager and all the time my aunt told me to be careful of who I talked to online, as they could be a pedo. Even though there wasn't a constant monitoring from my parents or family, her words were repeated many times that I actually thought 5 times before sharing any kind of personal information online, back then.
Helping parents would be useful rather than telling them to "just do it".
The pain in trying to set up fortnite and minecraft online as parents is unsummounted, involving creating half a dozen accounts with different companies just to get some form of control. Far easier to just give them an adult account.
The process to create a child account should be seamless and no harder than creating an adult account.
>Governments recycle "Think of the children" mantra and they are again after terrorists and bad guys.
nope, they are going after dissenters, not bad guys. It's how it always ends up.
I don’t understand this.
Age verification is not more difficult than a payment system.
I mean, if I can pay on a website without the website to know my credit card number, I should be able to prove my age without the website to know anything about me either.
France has a ID verification system for all its service. You’d think they should be able to provide a hook that lets people prove they’re over the age limit to any third party without the third party knowing. It seems fairly basic.
There is a solution to this.
There are privacy issues on the internet, but I think this ain’t one.
if you are paying for internet access you have to be over 18, no?
and if you have internet access without paying, that means someone else is legally responsible for your access
"problem solved" ?
Famously children can only access internet from wifi paid for by their parents.
I'm not for these draconian age verification nonsense, but this isn't a valid argument.
It is a valid alternative avenue towards a legal implementation of "child safeguarding" IMO. Someone pays for the internet, that person is responsible for what minors do on their connection. If they have trouble doing that we can use normal societal mechanisms like idk social services, education, and government messaging.
This is the way it works with e.g. alcohol and cigarettes, most places. Famously kids can just get a beer from a random fridge and chug it, but someone 16/18/21+ will be responsible and everyone seems mostly fine with this.
3 replies →
This is the answer. If you provide internet access to someone, you're responsible for it. It's a generally established law from a Torrenting PoV, so isn't it equally applicable to downloading content unsuitable for children. Sure it'll destroy offering free wifi, but that always was tricky from a legal PoV around responsibilities.
Ideally the law would require websites (and apps) to provide some signed age requirement token to the client (plus possibly classification) instead of the reverse. Similarly OS and web clients should be required to provide locked down modes where the maxium age and/or classification could be selected. As a parent I would the be able to setup my child device however I wish without loss of privacy.
Is it bypassable by a sufficiently determined child? Yes, but so it is the current age verification nonsense.
> if you are paying for internet access you have to be over 18, no?
No, that's not the case.
every contract by every ISP i have ever signed has required me to be over the age of 18 to enter the contract.
4 replies →
unless your kid never goes to public school that isn't true
or goes outside at all. free wifi is everywhere
1 reply →
I have a problem with an open internet and allowing open access to everything the internet can offer to young children.
It cannot be a friction-less experience. Allowing children to see gore and extreme porn at a young age is not healthy. And then we have all the "trading" platforms (gambling).
Even though my brothers were able to get many hard drugs when I was young, around 1977, there was a lot of friction. Finding a dealer, trusting them, etc. Some bars would not card us but even then there was risk and sometimes they got caught. In NY we could buy cigarettes, no friction, and the one drug I took when I was young, addicted to them at 16, finally quitting for good at 20. I could have used some friction there.
So how do we create friction? Maybe hold the parents liable? They are doing this with guns right now, big trial is just finishing and it looks like a father who gave his kid an ak47 at 13 is about to go to jail.
I would like to see a state ID program when the ID is just verified by the State ID system. This way nothing needs to be sent to any private party. Sites like Discord could just get a OK signal from the state system. They could use facial recognition on the phone that would match it with the ID.
Something needs to be done however. I disagree that the internet needs to be open to all at any age. You do not need an ID to walk into a library, but you need one to get into a strip club. I do not see why that should not be the same on the internet.
> The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely.
Well isn't this premise false from the get go? Many countries (not the US sure, but others) have digitised ID. Services can request info from the ID provider; in this case social media websites would simply request a bool isOver16, literally one bit of information, to grant access. No other information needs to be leaked, and no need for idiotic setups like sending photos of your passport to god knows what website (or god knows what external vendor that website uses for ID verification).
Seems silly to worry about this when social media itself is predicated on collecting gigabytes of data about you daily.
Again, this is not about half assed solutions that force you to send photos of your passport to websites. That's a terrible idea for the reasons discussed in the article. But it's obviously false that this is the only way.
Corporate interests don’t care about data privacy or security they care about liability and compliance which are not the same thing.
Major banks and government institutions can’t even be bothered to implement the NIST password guidelines. If they got their gdpr soc2 fedramp whatever it’s green lights and the rest is insurance.
Same people who are on Epstein files wants to protect children?
No. They want to fuck everybody.
The point is to undermine data protection; this debate is useless. It's a question about power and control, not a technical one. The people lobbying for this don't care about children, and neither are they getting big support from a constituency clamoring for this. This is an intelligence initiative, and a donor initiative from people who are in a position to control the platform (all computing and communications) after it is locked down.
It's not even worth talking about online. There's too much inorganic support for the objectives of nation-states and the corporations that own them.
Legislation has been advanced in Colorado demanding that all OSes verify the user's age. It will fail, but it will be repeated 100 times, in different places, smuggled attached to different legislation, the process and PR strategies refined and experimented with, versions of it passed in Australia, South Korea, maybe the UK and Europe, and eventually passed here. That means that "general purpose" computing will be eventually be lost to locked bootloaders.
https://www.pcmag.com/news/colorado-lawmakers-push-for-age-v...
[edit: I'm an idiot, they already passed it in California https://www.hunton.com/privacy-and-cybersecurity-law-blog/ca...]
And it will be an entirely engineered and conscious process by people who have names. And we will babble about it endlessly online, pretending that we have some control over it, pretending that this is a technical discussion or a moral discussion, on platforms that they control, that they allow us to babble on as an escape valve. Then, one day the switch will flip, and advocacy of open bootloaders, or trading in computers that can install unattested OSes, will be treated as organized crime.
All I can beg you to do is imagine how ashamed you'll be in the future when you're lying about having supported this now, or complaining that you shouldn't have "trusted them to do it the right way." Don't let dumb fairytales about Russians, Chinese, Cambridge Analytics and pedophile pornography epidemics have you fighting for your own domination. Maybe you'll be the piece of straw that slows things down just enough that current Western oligarchies collapse before they can finish. Maybe we'll get lucky.
Polls and ballots show that none of this stuff has majority organic support. But polls can be manipulated, and good polls have to be publicized for people to know they're not alone, and not afraid they're misunderstanding something. If both candidates on the ballot are subverted, the question never ends up on the ballot.
The article itself says nothing that hasn't been said before, and stays firmly under the premise that access to content online by under-18s is suddenly one of the most critical problems of our age, rather than a sad annoyance. What is gained by having this dumb discussion again?
It's crazy to me that we want to force age verification on every service across the Internet before we ban phones in school. I could understand being in favor of both, or neither, but implementing the policy that impacts everybody's privacy before the one that specifically applies within government-run institutions is just so disappointingly backwards it's tempting to consider conspiracy-like explanations.
The advantage, I think, of age verification by private companies over cellphone bans in public schools is that cellphone bans appear as a line-item on the government balance sheet, whereas the costs of age verification are diffuse and difficult to calculate. It's actually quite common for governments to prefer imposing costs in ways that make it easier for the legislators to throw up their hands and whistle innocently about why everything just got more expensive and difficult.
And the argument over age verification for merely viewing websites, which is technically difficult and invasive, muddles the waters over the question of age verification for social media profiles, where underage users are more likely to get caught and banned by simple observation. The latter system has already existed for decades -- I remember kids getting banned for admitting they were under 13 on videogame forums in the '00s all the time. It seems like technology has caused people to believe that the law has to be perfectly enforceable in order to be any good, but that isn't historically how the law has worked -- it is possible for most crimes to go unsolved and yet most criminals get caught. If we are going to preserve individual privacy and due process, we need to be willing to design imperfect systems.
> It's crazy to me that we want to force age verification on every service across the Internet before we ban phones in school.
France banned phones in elementary and noddles schools in 2018. It's not the only European country to have done so.
As a parent, I'm happy that social bans are finally a thing.
But, I don't get the approach. It's not like social media starts being a positive in our life at 20. The way these companies do social media is harmful to mental health at every age. This is solving the wrong problem.
The solution is to take away their levers to make the system so addictive. A nice space to keep in touch with your friends. Nothing wrong with that.
I'm going to state that at one point I was one of the young people this kind of legislation is meaning to protect. I was exposed to pornography at too young an age and it became my only coping mechanism to the point where as an adult it cost me multiple jobs and at one point my love life.
I don't think this legislation would have helped me. I found the material I did outside of social media and Facebook was not yet ubiquitous. I did not have a smartphone at the time, only a PC. I stayed off social media entirely in college. Even with nobody at all in my social sphere, it was still addicting. There are too many sites out there that won't comply and I was too technically savvy to not attempt to bypass any guardrails.
The issue in my case was not one of "watching this material hurt me" in and of itself. It was having nobody to talk to about the issues causing my addiction. My parents were conservative and narcissistic and did not respect my privacy so I never talked about my addiction to them. They already punished me severely for mundane things and I did not want to be willingly subjected to more. To this day they don't realize what happened to me. The unending mental abuse caused me to turn back to pornography over and over. And I carried a level of shame and disgust so I never felt comfortable disclosing my addiction to any school counselors or therapists for decades. The stigma around sexual issues preventing people from talking about them has only grown worse in the ensuing years, unfortunately.
At most this kind of policy will force teenagers off platforms like Discord which might help with being matched with strangers, but there are still other avenues for this. You cannot prevent children from viewing porn online. You cannot lock down the entire Internet. You can only be honest with your children and not blame or reproach them for the issues they have to deal with like mine did.
In my opinion, given that my parents were fundamentally unsafe people to talk to, causing me to think that all people were unsafe, then the issue of pornography exposure became an issue. In my case, I do not believe there was any hope for me that additional legislation or restrictions could provide, outside of waking up to my abuse and my sex addiction as an adult decades later. Simply put, I was put into an impossible situation, I didn't have any way to deal with it as a child, and I was ultimately forsaken. In life, things like those just happen sometimes. All I can say was that those who forsook me were not the platforms, not the politicians, but the people who I needed to trust the most.
I believe many parents who need to think about this issue simply won't. The debate we're having here on this tech-focused site is going to pass by them unnoticed. They're not going to seriously consider these issues and the status quo will continue. They won't talk with their children to see if everything's okay. I don't have many suggestions to offer except "find your best family," even if they aren't blood related.
"In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are."
These so-called "platforms" already collect data about who people are in order to facilitate online advertising and whatever else the "platform" may choose to do with it. There is no way for the user to control where that data may end up or how it may be used. The third party can use the data for any purpose and share it with anyone (or not). Whether they claim they do or don't do something with the data is besides the point, their internal actions cannot be verified and there are no enforceable restrictions in the event a user discovers what they are doing and wants to stop them (at that point it may be too late for the user anyway)
"Tech" journalists and "tech bros" routinely claim these "platforms" know more about people than their own families, friends and colleagues
That's not "privacy"
Let's be honest. No one is achieving or maintaining internet "privacy" by using these "platforms", third party intermediaries (middlemen) with a surveillance "business model", in order to communicate over the internet
On the contrary, internet "privacy" has been diminishing with each passing year that people continue to use them
The so-called "platforms" have led to vast repositories of data about people that are used every day by entities who would otherwise not be legally authorised or technically capable of gathering such surveillance data. Most "platform" users are totally unaware of the possibilities. The prospect of "age verification" may be the wake up call
"Age verification" could potentially make these "platforms" suck to a point that people might stop using them. For example, it might be impossible to implement without setting off users' alarm bells. In effect, it might raise more awareness of how the vast quantity of data about people these unregulated/underregulated third parties collect "under the radar" could be shared with or used by other entities. Collecting ID is above the radar and may force people to think twice
The "platforms" don't care about "privacy" except to control it. Their "business model" relies on defeating "privacy", reshaping the notion into one where privacy from the "platform" does not exist
Internet "privacy" and mass data collection about people via "platforms" are not compatible goals
"... our founders displayed a fondness for hyperbolic vilification of those who disagreed with them. In almost every meeting, they would unleash a one-word imprecation to sum up any and all who stood in the way of their master plans.
"Bastards!" Larry would exclaim when a blogger raised concerns about user privacy."
- Douglas Edwards, Google employee number 59, from 2011 book "I'm feeling lucky"
If a user decides to stop using a third party "platform" intermediary (middleman) that engages in data collection, surveillance and ad services, for example, because they wish to avoid "age verification", then this could be the first step toward meaningful improvements in "internet privacy". People might stop creating "accounts", "signing in" and continuing to be complacent toward the surreptititious collection of data that is subsequently associated with their identity to create "profiles"
[dead]
[dead]
[dead]
[dead]
[dead]
[flagged]
[flagged]
>"None of this is an argument against protecting children online. It is an argument against pretending there is no tradeoff"
Tradeoff acknowledged, and this runs both sides, there's hundreds of risks that these policies are addressing.
To mention a specific one, I was exposed to pornography online at age 9 which is obviously an issue, the incumbent system allowed this to happen and will continue to do so. So to what tradeoffs in policy do detractors of age verification think are so terrible that it's more important than avoiding, for example, allowing kids first sexual experiences to be pornography. Dystopian vibes? Is that equivalent?
Or, what alternative solutions are counter-proposed to avoid these issues without age verification and vpn bans.
Note 2 things before responding:
1)per the original quote, it is not valid to ignore the trade offs with arguments like "child abuse is an excuse to install civilian control by governments"
2) this was not your initiave, another group is the one making huge efforts to intervene and change the status quo, so whatever solution is counterproposed needs to be new, otherwise, as an existing solution, it was therefore ineffective.
If any of those is your argument, you are not part of the conversation, you have failed to act as wardens of the internet, and whatever systems you control will be slowly removed from you by authorities and technical professionals that follow the regulations. Whatever crumbs you are left as an admin, will be relegated to increasingly niche crypto communities where you will be pooled with dissidents and criminals of types you will need to either ignore or pretend are ok. You will create a new Tor, a Gab, a Conservapedia, a HackerForums, and you will be hunted by the obvious and inequivocal right side of the law. Your enemy list will grow bigger and bigger, the State? Money? The law? God? The notion of right and wrong which is like totally subjective anyways?
> I was exposed to pornography online at age 9....allowing kids first sexual experiences to be pornography
I was initially exposed to pornography at 8 years old, by finding a disgarded magazine in a hedge. However this was pretty soft.
I was exposed to serious pornography at 10 years by finding a hidden VHS tape in the back of a drawer at a friends house and getting curious. This was hardcore German stuff with explicit violence. This has caused me to have therapy in my lifetime.
This was all in the 80s by the way.
Therefore anything you are mentioning happened long before the internet, and is totally possible in a completely offline world as well. So how do these new digital laws 'protect children' again?
My new comment
We should just ban smartphones, it's where a great deal of the harm comes from and is harder for parents to manage. No need for children to have cameras connected to the internet whether via smartphones or computers.
Device based attestation seems like the way to go largely; it doesn't solve the problem, but it's good enough that it would cover most cases.
Not really. It just pushes the responsibility onto parents, who already have no idea how security works or what their kids are doing on their phones.
I know many will disagree and that is ok. Imo we need global id based on nation states national id. I know that the US doesn't have that, but the rest of the developed world do. I don't want id on porn sites because I don't think that is necessary, but I want bot-free social media, 13+ sharing forums like reddit and I want competitive games where if you are banned you need your brothers id to try cheating again.
From the second paragraph:
> And the only way to prove that you checked is to keep the data indefinitely.
This is not true and made me immediately stop reading. If a social media app uses a third party vendor to do facial/ID age estimation, the vendor can (and in many cases does) only send an estimated age range back to the caller. Some of the more privacy invasive KYC vendors like Persona persist and optionally pass back entire government IDs, but there are other age verifiers (k-ID, PRIVO, among others) who don't. Regulators are happy with apps using these less invasive ones and making a best effort based on an estimated age, and that doesn't require storing any additional PII. We really need to deconflate age verification from KYC to have productive conversations about this stuff. You can do one thing without doing the other.
If you don't keep and cross-reference documents it is really easy to circumvent, e.g. by kids asking their older siblings to sign them up.
I don't think a bulletproof age verification system can be implemented on the server side without serious privacy implications. It would be quite easy to build it on the client side (child mode) but the ones pushing for these systems (usually politicians) don't seem to care about that.
Yep, it is easy to circumvent, and the silver lining of all of this is that regulators don't care. They care that these companies made an effort in guessing.