Discord's face scanning age checks 'start of a bigger shift'

4 months ago (bbc.com)

A long, long time ago (within the past ten years), I had to verify my age with a site. They didn't ask for my ID or a facial scan, but instead asked for my credit card number. They issued a refund to the card of a few cents, and I had to tell them (within 24 hours) how much the refund was for, after which point they'd issue a charge to claw it back. They made it clear that debit and gift cards would not be accepted; it had to be a credit card. So I grabbed my Visa card, punched in the numbers, checked my banking app to see the +$0.24 refund, entered the value, got validated, and had another -$0.24 charge to claw it back.

Voila, I was verified as an adult, because I could prove I had a credit card.
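The flow described above amounts to a challenge-response against the card statement. Here's a minimal sketch of the idea in Python; this is purely illustrative, not any particular provider's API:

```python
import random
from datetime import datetime, timedelta

class MicroRefundCheck:
    """Toy model of the credit-card micro-refund age check described above."""

    def __init__(self):
        self.challenges = {}  # card_number -> (amount_cents, expiry)

    def start(self, card_number: str) -> int:
        # Issue a refund of a random few cents; the cardholder must
        # read the exact amount off their statement within 24 hours.
        amount = random.randint(1, 99)
        self.challenges[card_number] = (amount, datetime.now() + timedelta(hours=24))
        return amount  # in reality this goes to the card network, not the user

    def verify(self, card_number: str, claimed_cents: int) -> bool:
        # One-shot: the challenge is consumed whether or not it matches.
        amount, expiry = self.challenges.pop(card_number, (None, None))
        if amount is None or datetime.now() > expiry:
            return False
        # A matching answer proves access to the card's statement,
        # i.e. (presumably) possession of a credit card.
        return claimed_cents == amount
```

The 24-hour expiry and one-shot challenge mirror the flow above. Note the weak link: reading the statement proves you hold a credit card, which is itself only a proxy for being an adult.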

The whole point of mandating facial recognition or ID checks isn't to make sure you're an adult, but to keep records of who is consuming those services and tie their identities back to specific profiles. Providers can swear up and down that they don't retain that information, but they often use third parties who may or may not abide by those same promises, especially if the government comes knocking with a secret warrant or subpoena.

Biometric validation is surveillance, plain and simple.

  • That was, in fact, what COPA mandated in the US in 1998, and SCOTUS struck it down as too onerous in Ashcroft v. American Civil Liberties Union, kicking off the last 20 years of essentially completely unregulated Internet porn commercially available to children with nothing more than clicking an "I'm 18" button. At the time, filtering was seen as a better solution. Nowadays filtering is basically impossible thanks to TLS (with things like DoH and ECH being deployed to lock that down even further), apps that ignore user CAs and use attestation to lock out owner control, cloud CDNs, TLS fingerprinting, and the extreme consolidation of social media (e.g. Discord hosting both Minecraft discussions and furry porn).

    • Despite TLS, filtering is easier to set up now than it was in 1998. You might have to block some apps in the short term, but if you suggest apps can avoid age verification if they stop pinning certificates then they'll jump at the option.

      Consolidation is the only tricky part that's new.

      4 replies →

    • Let's just skip straight to the logical conclusion, buddy. No amount of "web" or "Discord" regulation stops porn consumption. The statistic of "minors viewing porn" wouldn't be affected even slightly, even if all of the regulation in question here were passed to the fullest extent. This is because people can just download and run whatever software they want, and communicate with any party they want. What you want is for people to not have control over their computers and the communications made from them. People talk about a middle ground, but there is none, because you will always just notice that the "minors viewing porn" statistic is not affected by your latest law, until you have absolute control over civilian communications. This is completely against what anyone in the open source community, let alone a democracy, stands for.

      3 replies →

    • This has already come up before the Supreme Court, with the argument that filtering was a less invasive technique to fulfill the government’s legitimate interests back in the early 2000s.

      That ship has sailed. Even the opposition admits that trying to get everyone to filter is not going to work and is functionally insignificant. The only question is whether age verification is still too onerous.

      38 replies →

  • Is card verification a lesser form of surveillance? And there’s a good chance your card issuer (or your bank, one hop away from it) has your biometrics anyway.

    I don’t like either of them… (And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?)

    • Oh, make no mistake, I hate both of these. I loathe this forced surveillance of everyone because parents can't be bothered to supervise and teach their children about the most primary of human animal functions (sex), regardless of their reasons for it.

      I take great pains to keep minors out of my adult spaces, and don't have to resort to anything as invasive as biometric surveillance or card charges. This notion that the entire world should be safe for children by default, and that anything and everything adult should be vilified and locked up, is toxic as all get-out and builds shame into the human animal over something required for the perpetuation of the species.

      The adult content isn't the problem, it's the relationship some folks have towards it that's the issue. That's best corrected by healthy intervention early on, not arbitrary age checks everywhere online that mainly serve as an exercise of power by the ruling class against "undesirable" elements of society.

      7 replies →

    • > And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?

      Yeah, those checks are super annoying. The internet has been around long enough; mechanisms for this should exist.

      And even in the shorter term: if I had to be 13 to make this account, and it has been more than 5 years, maybe relax?

    • > Is card verification a lesser form of surveillance?

      It's not just a question of which surveillance is worse; it's also simply that everyone has a face but not everyone has a credit card. I'm not deemed creditworthy in this country I moved to (I've never had a debt in my life, but they don't know that), so the card application got rejected. Do we want to upload biometrics, or exclude poor and unknown people from "being 18"? I really don't know which is the lesser poison.

      > (And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?)

      I'd guess they didn't want to bother with that edge case. Probably <0.01% of active YouTube accounts are >18 years old.

      5 replies →

  • What you describe is called QES (Qualified Electronic Signature) and is still widely used to validate identities.

    Unfortunately it is not enough to prove an identity (you could be using the credit card of your traveling uncle), and regulation requires it to be combined with another proof.

    I see a lot of people associating identity verification with evil intent (advertising, tracking).

    I work in this domain and the reality is a lot less interesting: identity verification companies do this and only this, under strict scrutiny both from their customers and from the regulators.

    We are not where we want to be from a privacy standpoint but the industry is making progress and the usage of identity data is strictly regulated.

  • PayPal used this method for identity (or at least account) verification back in the very early days, IIRC. They made a very small deposit, and I think they just let you keep it, but I can't recall for sure.

  • Credit cards are trivially traceable to your legal identity, since anti-money-laundering and know-your-customer laws require that credit card companies keep this information. The government can subpoena this information just as easily as they could with pictures of your face or ID.

    • How do you prove the person typing in the credit card details is the same person who owns the card?

      I know I've read stories of kids taking cards to purchase games or other things online numerous times over the last 20+ years.

  • As we've seen, if the information is retained, it will be used.

    The only safe approach is for that information not to exist in the first place.

  • I had a debit card when I was 13. An absolute godsend during international travel, not having to bother with cash as a forgetful teenager.

    The card providers share your identity in monetary transactions, but I don't think this data does, or should, include a birthdate.

    • These checks accept only a credit card.

      That's useful as one option, but can't be expected of 18 year olds in most countries, and older adults in many.

      1 reply →

  • What if you don't have a credit card? This solves nothing. A better approach is a system like the Polish "MojeID" (my ID) [1]. It works in the following way: a site needs to verify information X, so it redirects the user to their bank (which has to provide this service); the user logs in there and agrees to let the bank share only what was requested, which could be a single piece of information, such as a birth date.

    This is a good solution: banks are obliged to provide a free bank account for anyone (there is an EU regulation on that), it is very safe, and it gives users full visibility into what data the third party requested.

    [1] https://www.kir.pl/nasza-oferta/klient-indywidualny/identyfi...

  • 1. Your old credit card solution needs a credit card, so you exclude the poor, those with bad credit, etc.

    2. Parents will help kids bypass checks like that.

    3. It can be bypassed by any half-smart 13-year-old who can access an app on a phone that shows the card details and the transactions.

    Any verification that doesn't actually verify you via proper means is easy to fake. Hell, we can fake passport/ID photos easily enough, so now we have to jump on calls with the passport and move it around.

    The days of the wild west of the internet are long gone. It's time to realise that it's so important that it deserves the same level of verification we give to in-person activities: someone seeing you and/or your ID. That's the only approach with a real chance of not being bypassed with ease.

  • > They issued a refund to the card of a few cents, and I had to tell them (within 24hr) how much the refund was for, after which point they'd issue a charge to claw it back.

    This was one of the methods that CompuServe used back in the 1980s, though with a checking account.

    It's sad that so many aspects of technology have completely failed to improve in half a century.

  • I don't really get your point; surely a credit card is even more strongly linked to your identity than your face?

  • I basically agree with you, but it's not like you could not be tracked using your credit card number.

    • This is about more than tracking: having your biometrics means they can resell the data to other providers (e.g. Palantir or some other hellish enterprise). With that, the places and means of following you in real time are practically limitless...

      There have been so many dystopian movies about this kind of tech; they're a good preview of what comes next.

  • > Biometric validation is surveillance, plain and simple.

    Eh. It's just easier and cheaper. I'll bet Discord has outsourced this to one of those services that ask you for a face scan when you sign up to [some other service].

This is never about protecting the children.

This is always about government overreach.

People are less likely to criticize the government, or even participate in political debate, if their online identities are known by the government. Governments like obedient, scared citizens.

The only ethical response to laws like this is for websites and apps to terminate operations completely in countries that create them. Citizens who elect politicians without respect for human rights and privacy don't really deserve anything nice anyway.

  • Providing identity and access services at scale is certainly a few people's next big plan, and it appears they've managed to sell the representatives of their own states on it first.

    This sort of thing can't happen except through the largest tech companies in the world, who are coincidentally already poised to be the world's official providers of digital identity, and private internet enclaves.

    Look at what Microsoft has done with Windows - mandatory minimum TPM to install and a Microsoft account registration for a local user. Try using an Apple iPad or iPhone without an iCloud account or adding a payment method. Google wants you to sign in with them, everywhere, aggressively. Cloudflare has been the web's own private gatekeeper for the last decade. Facebook's whole product is identity. IBM has sold surveillance, IAM, and facial recognition services for decades.

    Instead of a clunky IP-based Great Firewall, imagine being able to render VPNs ineffective and unnecessary everywhere on the planet by a person's (verified national) identity. Click. Block and deactivate all members of group "Islamic State" on your platform. Click. Allow IDs registered to this ZIP Code to vote in this election. Click. CortanaSupreme, please dashboard viewer metrics by usage patterns that indicate loneliness, filtering for height, last assessed property values, and marriage status, and show their locations.

    Currently, laws don't require age verification, just that ineligible parties are excluded. There's no legal requirement to card someone before selling them alcohol, and there's no reason anyone would need a depth map of someone's face when we could safely assume that the holder of a >5 year old email account is likely to be 18 if 13 is the minimum age to register with the provider.

    Shifting the onus to parents to control what their kids do on the internet hasn't worked. However, that's a bare sliver of what's at stake here.

  • The anonymous, unchecked Internet got us where we are today. It was a great experiment in worldwide communication, but it has now been converted into a weapon for the same type of authoritarians that previously used traditional media and propaganda channels. AI is only accelerating the possibilities for abuse. Critical thinking skills taught from a young age are the only defense.

  • That’s a very strange take on governments, treating them as a singular entity. A government that deserves the name is first and foremost an elected set of representatives of the constituents, and thus, wanting those citizens to vote for them again, acts in their interests.

    If the government is not working like that, you have an administrative problem, not a societal one. A state is its population.

    • > A state is its population.

      Very dangerous thinking. Unless each and every citizen has approved the elected "representative" and every decision they made (which will never happen), you cannot conflate the state and the population. The state has to be considered a separate entity, one which operates beyond the common man's thinking.

      6 replies →

    • > A state is its population.

      Oh that's not true at all. A state is an institution which is influenced by its population, but if anything, the attitudes of the population are more a product of the state, its constituent political parties, and the associated media apparatuses than of a freestanding "will of the people."

      To give a trivial counterexample, if the American state "is" its population, then why does your presidential vote only matter if you live in a swing state, and why can you only vote for one of two candidates? Surely your vote should reflect all of your policy preferences and have equal influence no matter where you live.

    • > treating them as a singular entity

      The entities that keep pushing for that stuff tend to be quite centralized.

  • > Citizens who elect politicians without respect for human rights and privacy don't really deserve anything nice anyway.

    Unfortunately things don't always work out that cleanly:

    - Sometimes you vote for the pro-freedom candidate, but your candidate loses.
    - Sometimes there are only two dominant candidates, and both disrespect human rights.
    - Sometimes one candidate disrespects human rights in some particular way, but the other candidate has different, bigger problems, so you vote for the lesser of two evils.
    - Sometimes a candidate says one thing while campaigning, and then when elected does something different.

Aside from the privacy nightmare, what about someone who is 18 and just doesn't have traditionally adult facial features? Same thing for someone who's 15 and hit puberty early? I can imagine that at the edges, it becomes really hard to discern.

If they get it wrong, are you locked out? Do you have to send an image of your ID? So many questions. Not a huge fan of these recent UK changes (looking at the Apple E2E situation as well). I understand what they're going for, but I'm not sure this is the best course of action. What do I know though :shrug:.

  • Wise (née TransferWise) requires a passport-style photo taken by a web app for KYC when transferring money. I was recently unable to complete that process over a dozen tries, because the image processing didn't like something about my face. (The photos met all the criteria.)

    On contacting their support, I learned that they refused to use any other process. It also became apparent that they had outsourced it to some other company and had no insight into the process, and so no way to help. Apparently closing one's account will cause an escalation to a team who determines where to send the money, which would presumably put some human flexibility back into the process.

    (In the end I was able to get their web app to work by trying several other devices; one had a camera that, for whatever reason, satisfied their checks that my face was within the required oval, etc.)

    • > On contacting their support, I learned that they refused to use any other process.

      I suspect this won't help you, but I think it's worth noting that the GDPR gives people the right to contest any automated decision-making that was made on a solely algorithmic basis. So this wouldn't be legal in the EU (or the UK).

    • Hah, indeed, a similar experience here. The desktop option is worse; trying to get a webcam to focus on an ID card took forever. The next step wanted a third-party company to do a live webcam session, no thanks! Closed the account. Or at least tried to: after a several-step nag process, they still keep the email blocked to that account, in case you change your mind...

      There seems to be no way to push back against these technologies. Next it will be an AI interview asking "why are you transferring this money?"

  • Also, a key point in the framing: when was it decided that Discord is supposed to be the one enforcing this? A pop-up saying "you really should be 18+" is one thing, but this sounds like a genuine effort to lock out young people. Neither Discord nor a government ratings agency should be taking final responsibility for how children are brought up; that seems like something parents should be responsible for.

    This is over-reach. Both in the UK and Australia.

    • When a corner shop sells cigarettes to minors, who's breaking the law?

      When a TV channel broadcasts porn, who gets fined?

      These are accepted laws that protect kids from "harm", which are relatively uncontroversial.

      Now, the privacy angle is very much the right question. But as Discord is the one that is going to get fined, they absolutely need to make sure kids aren't being exposed to material they shouldn't be seeing until they are old enough, in the same way the corner shop needs to make sure they don't sell booze to 16-year-olds.

      Now, what is the mechanism that Discord should/could use? that's the bigger question.

      Can government provide foolproof, secure, private, and scalable proof-of-age services? How can private industry do it? (Hint: they won't, because it's a really good source of profile information for advertising.)

      14 replies →

    • > This is over-reach. Both in the UK and Australia

      2/3 of Australians support minimum age restrictions for social media [1], and it was particularly popular amongst parents. Putting the responsibility solely on parents shows ignorance of the complexities of how children grow up these days.

      Many parents have tried to ban social media, only for those children to experience ostracisation amongst their peer group, leading to poorer educational and social developmental outcomes at a critical time in their lives.

      That's why you need governments and platform owners to be heavily involved.

      [1] https://www.theguardian.com/australia-news/article/2024/jun/...

      6 replies →

    • It almost certainly is overreach, but locking young people out of porn is hardly a new concern. We have variants of this argument continuously for decades. I'm not sure there is a definitive answer.

    • There's a SCOTUS case, FSC v. Paxton, that could very well decide whether age verification is enforced in the US as well, so sadly this is just the beginning.

  • It's a good thing to think about. I knew a guy in high school who had male pattern baldness that started at 13 or 14. Full blown by the time he was 16. Dude looked like one of the teachers.

    • Same in my driver's ed at 16: a guy had a man's face, a large stocky build, and a thick full beard. I once was talking to a tall, pretty woman who turned out to be a 12-year-old girl. And I have a friend who for most of his 20s could pass for 13-14 and had a hell of a time getting into bars.

      This facial thing feels like a loaded attempt to both check a box and get more of that sweet, sweet data to mine. A massive privacy invasion and exploitation of children dressed up as security theater.

  • It's not even edge cases: I was a young-looking woman and was mistaken for a minor until I was about 24-25. My mother had her first child (me) at 27 and tells me how she and my father would get dirty looks because people assumed he was some dirty old man who had impregnated a teenager. (He was 3 years older than her.)

    I think, ironically, the best way to fight this would be to lean on identity politics: There are probably certain races that ping as older or younger. In addition, trans people who were on puberty blockers are in a situation where they might be 'of age' but not necessarily look like an automated system expects them to, and there might be discrepancies between their face as scanned and the face/information that's show on their ID. Discord has a large trans userbase. Nobody cares about privacy, but people make at least some show of caring about transphobia and racism.

    > So many questions.

    Do they keep a database of facial scans even though they say they don't? If not, what's to stop one older looking friend (or an older sibling/cousin/parent/etc.) from being the 'face' of everyone in a group of minors? Do they have a reliable way to ensure that a face being scanned isn't AI generated (or filtered) itself? What prevents someone from sending in their parent's/sibling's/a stolen ID?

    Seems like security theater more than anything else.

    • I had a colleague who, when out with her boyfriend, had the police called on him because someone believed he was a pedophile.

      She was 26. She just looked that young.

      :/

    • I don't think they make much of a show of caring about trans rights in the UK right now, unfortunately. In the US, though, I think you can make a strong case that a big database of faces and IDs could be really dangerous.

      3 replies →

  • I witnessed the Better Off Ted water-fountain skit play out in real life once; it was incredibly awkward. I was helping my buddy, his black friend, and his friend's wife set up accounts on online casinos in Michigan for the promos/refer-a-friend rewards. Some of the sites require live video facial verification, and we were doing it in a dimly lit space at night. It worked instantly and without issue for my friend and me, but oh man, it took many, many attempts and several additional lights to get it to work for his friends.

  • The right thing to do here is for Discord to ignore the UK laws and see what happens, IMO.

    Is there a market for leaked facial scans?

    • With the UK currently battling Apple, Discord has no chance of avoiding a lawsuit.

      Ofcom is serious about enforcing its rules, especially against a multi-national like Discord that even "normies" know and use.

      And if they got a slap of "we will let you off this time", they would still have to create some sort of verification service to comply the next time.

      You might as well piss off your users, lose some of them, and still hold centre stage, rather than fight the case. Nothing is stopping Ofcom from launching another lawsuit thereafter.

      > Is there a market for leaked facial scans?

      There's a market for everything. Fake driver's licenses with fake pictures have been around for decades; this would be no different.

  • Devil's advocate: couldn't this be better for privacy than other age checks because it doesn't require actual identification?

  • It doesn't even have to be an "untraditional facial feature". How are they going to differentiate an 18-year-old from someone who is 17 years and 11 months old? The latter is not legally an adult.

  • > what about someone who is 18 and just doesn't have the traditional adult facial features?

    This can be challenging even with humans. My ex got carded when buying alcohol well into her mid thirties, and staff at the schools she taught at mistook her for a student all the time.

    • I grew a beard when I was younger because I was tired of being mistaken for a high schooler; it's quite annoying to have people assume you are 15 when you're 20. Still regularly carded in my 30s.

  • Didn't Australia ban porn featuring women with A cups, under the justification that pedophiles like them?

    Edit: This isn't how it played out. See the comment below.

    • No, it's just nonsense you invented because you were unwilling to do any research.

      The actual situation was that the board refused classification where an adult was intentionally pretending to be an underage child, not where they merely looked like one.

      3 replies →

I don't think the problem is that young people are finding porn on the internet. There is a problem, though, and it has to do with psychological warfare on attention.

Formats like shorts, or news feeds served to you algorithmically with zero lag, are the problem. They make for the zombification of decision making. Endless content breaks people down precisely because it's endless. I think if you add age verification but don't root out the endless nature, you will not really help any young person or adult.

When you look at people with unhealthy content addiction, it is always a case of excess and not necessarily the type of content. There are pedophiles but, honestly, we have had those throughout all time, with and without the internet. But the endless feeding of the next video robs people of the ability to stop, by mentally addicting them to see just one more. And because content is not really infinite, endless feeds invariably will feed people porn, eating disorders, and other "crap" in quantities that slowly erode them.

  • There was just another article on HN about how Snapchat is harming children at an industrial scale. Not just giving them access to drugs, violence, pedophiles, and extortionists, but actively connecting them with those accounts. Kids have died from fentanyl or committed suicide from bullying and harassment.

    Porn addiction is bad but it seems there are even worse things happening.

  • [flagged]

    • > 2 straight generations of porn addicts

      Different types of pornography have different dangers and all of it has been broadly available since before the internet.

      > And then you have shit like watchpeopledie.tv.

      I think there's a broad gulf between these activities and I don't think they impact the brain in the same way as pornography. This type of violence can be found in movies and video games which also clearly predate the internet.

      > Children should have been banned from the internet a decade ago

      I'd rather pornography be banned.

      > I'm completely willing to give up some privacy to make it happen.

      Why? It should be incumbent on the people profiting from this activity to police it not on me to give up constitutional rights to protect their margins.

      1 reply →

> The social media company requires users to take a selfie video on their phone and uses AI to estimate the person's age.

What I did not see in this article was anything about how AI can tell a 13-year-old from a 12.9-year-old with confidence. This seems unlikely to me.

I agree with the article's implication that websites will now want a scan of everyone's faces forever. Their insistence that they won't store the face scans is like one of those cute lies that kids tell and adults aren't fooled by. Either you're outright lying, or you're using the loophole of not storing the image but rather storing a set of numbers, derived from the image, which act as a unique fingerprint. Or you're sending it to a third party for storage. Or something like that. But you're definitely keeping track of everyone's faces; don't try to pull a fast one on me, young lady, I've been around the block before.
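For the curious, the "set of numbers derived from the image" is typically an embedding, and comparing embeddings re-identifies a face about as well as comparing photos would. A toy sketch with made-up four-dimensional vectors (real systems derive much longer vectors from a face-recognition model):

```python
import math

def cosine_similarity(a, b):
    """Compare two face embeddings; near 1.0 means 'same face'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: a real system would derive these from the
# selfie video with a face-recognition model, then discard the image.
stored = [0.12, -0.48, 0.33, 0.80]     # kept "instead of" the photo
new_scan = [0.11, -0.46, 0.35, 0.79]   # same person, new session
stranger = [-0.70, 0.22, -0.05, 0.41]

assert cosine_similarity(stored, new_scan) > 0.95  # re-identified
assert cosine_similarity(stored, stranger) < 0.5   # different person
```

The point being: deleting the image while keeping the embedding deletes nothing that matters for tracking.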

I would like to think that there is a solution that can be engineered, in which a service is able to verify that a user is above an appropriate age threshold while maintaining privacy safeguards, including, where relevant, the age-protected service not being privy to the identity of the user, and the age verification service not being privy to the nature of the age-protected service being accessed.

In this day and age of crypto, certificates, SSO, and all that gubbins, it's surely only a matter of deciding that this is a problem that needs solving.

(Unless the problem really isn't the age of the user at all, but harvesting information...)
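One rough shape such a scheme could take, as a toy sketch: an age-verification service checks the user once, out of band, and issues a signed single-use token carrying only the yes/no answer. HMAC stands in here for the asymmetric signatures or zero-knowledge proofs a real design would use; all names are illustrative:

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the age-verification service

def issue_token(user_is_over_18: bool) -> dict:
    """The verification service checks the user's age out of band, then
    issues a one-time token that carries only the yes/no answer."""
    if not user_is_over_18:
        raise ValueError("threshold not met")
    claim = {"over_18": True, "nonce": secrets.token_hex(16)}
    msg = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return claim  # no name, birthdate, or site identifier inside

def site_accepts(token: dict) -> bool:
    """The age-gated site checks the signature, learning nothing beyond
    'some adult vouched for by the issuer'."""
    claim = {k: v for k, v in token.items() if k != "sig"}
    msg = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token.get("sig", ""))
```

With HMAC, the site would need the issuer's key (or a call to the issuer's API), so a real deployment would use public-key signatures instead: the site verifies with the issuer's public key and the issuer never learns which site asked. The fresh nonce per token keeps the token itself from becoming a cross-site tracking identifier.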

  • Unfortunately, no amount of blockchains and zero-knowledge proofs can compensate for the fact that a 15-year-old has an 18-year-old friend. Or that another 15-year-old looks older than some 20-year-olds. Or that yet another 15-year-old's dad often leaves his wallet, with his driving licence, unattended.

    Over the next five years, you can look forward to a steady trickle of stories in the press about shocked parents finding that somehow their 15 year old passed a one-time over-18 age verification check.

    The fact that the law is nigh-impossible to comply with is intentional - it is designed that way, because the intent is to deliver a porn ban while sidestepping free speech objections.

    • None of these things are a problem.

      > 15 year old has a 18 year old friend

      Adults can be prosecuted for helping minors circumvent the checks.

      > Or the fact that other 15 year old looks older than some 20 year olds

      See the Australian approach. The site can verify you while neither the government nor the site knows who you are. No need for a photo.

      > shocked parents finding

      No law is a replacement for good parenting. But good parenting is easier with the right laws.

      > a one-time over-18 age verification check

      It can happen more than once, non-intrusively.

      2 replies →

  • Already exists in a lot of places. German national IDs for like 10 years or something like that have an eID feature. It's basically just a public/private key signing scheme. The government and a bunch of other trusted public providers are able to issue identities, you can sign transactions with them or verify your age to commercial service providers, or transfer some data if that's required with your consent. (https://www.personalausweisportal.de/Webs/PA/EN/citizens/ele...)

    Estonia and South Korea I think also have similar features on their IDs, it's already a solved problem.

  • There is a solution and I am the developer:

    https://news.ycombinator.com/item?id=40298552#40298804

    Talking about it or explaining it is like pulling teeth; there's generally just a thorough misunderstanding of the notion... even though cryptographic certificates make the modern internet possible.

  • Here is my solution:

    Provide easy to use on-device content filtering tools so parents can easily control what their children can access (there are a few ways to do this through law, like requiring it from OS providers or ISPs or just writing these tools directly).

    To make it easy, Discord can provide their services under both adults.discord.com and minors.discord.com so parents can more easily block only the 18+ version of Discord.

    Require personal responsibility from parents to decide what is appropriate for their child.
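    A minimal sketch of how the subdomain-split idea would make on-device filtering trivial (the `adults.discord.com` / `minors.discord.com` names are the hypothetical ones from the comment above, not real Discord domains):

```python
# Toy hostname filter: if adult content lives on its own subdomain,
# a parental control tool only needs a small blocklist.
BLOCKED = {"adults.discord.com"}  # hypothetical domain from the comment

def allowed(hostname: str) -> bool:
    """Return False for blocked hosts and any of their subdomains."""
    parts = hostname.lower().split(".")
    suffixes = {".".join(parts[i:]) for i in range(len(parts))}
    return not (suffixes & BLOCKED)

print(allowed("minors.discord.com"))      # True: the all-ages site passes
print(allowed("cdn.adults.discord.com"))  # False: subdomains are blocked too
```

    Of course this only works if services cooperate with the split, which is why the comment suggests requiring it by law.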

  • The problem is who pays to maintain the system. There are systems that allow you to share your age anonymously (among other things) and they’re already widely used in Europe but the system knows what you’re using it for since the second party pays for the information, and some accounting info is needed for the billing. It would be completely illegal for the system to use that info for anything else though.

  • > a service is able to verify that a user is above an appropriate age threshold, while maintaining privacy safeguards

    AFAIU, the German electronic ID card ("elektronischer Personalausweis") can do this, but it is not widely implemented, and of course geographically limited.

  • The problem is that it is much easier to implement such a check in a way which lets the verification service link the site to the user, with no discernible difference to the end user.

    e: I get the same feeling as I do reading about key escrow schemes in the Clipper chip vein, where nobody claimed it was theoretically impossible to have a "spare key" only accessible by warrant, but the resulting complexity and new threat classes [1] just were not worth it

    [1] https://academiccommons.columbia.edu/doi/10.7916/D8GM8F2W

  • Transferring your age and a way to verify it to any third party is by definition a privacy violation. Doing so in a safe way is literally impossible since I don't want to share that information in the first place.

    • I feel like you could, theoretically, have a service that has an ID (as drivers license ID), perhaps operated by your government, that has an API and a notion of an ephemeral identifier that can be used to provide a digital attestation of some property without exposing that property or the exact identity of the person. It would require that the attestation system is trusted by all parties though, which is I think the core problem.

      4 replies →

    • > Transferring your age and a way to verify it to any third party is by definition a privacy violation.

      No it's not. Unless...

      > Doing so in a safe way is literally impossible since I don't want to share that information in the first place.

      ...well then it is.

      But it's not constructive to claim that proving your age to someone is by definition a privacy violation. If someone wants to prove their age to someone, then that's a private communication that they're entitled to choose to make.

      It is true that if technology to achieve this becomes commonplace, then those not wishing to do so may find it impractical to maintain their privacy in this respect. But that doesn't give others the right to obstruct people who wish to communicate in this way.

  • Crypto comes up every time this topic is discussed but it misses the point.

    The hard part is identifying with reasonable accuracy that the person sitting in front of the device is who they say they are, or a certain age.

    Offloading everything to a crypto primitive moves the problem into a different domain, where the check verifies that you have access to some crypto primitive, not that it's actually you or yours.

    Any fully privacy-preserving crypto solution would have the flaw that verifications could be sold online. Someone turns 21 (or another age) and begins selling verifications with their ID, because there is no attachment back to them and therefore no consequences. So people then start imagining extra layers that would protect against this, which start eroding the privacy because you're returning to central verification of something.

    • That sounds like a reasonable compromise to me, it's already what happens with ID for pubs etc so I don't think it's much different to the status quo

I'm in the UK and discord has asked me to complete this check (but I haven't, yet). I can still use discord just fine, it just won't let me view any media it considers "adult".

I am an adult but refuse to let them scan my face as a matter of principle, so I've considered using https://github.com/hacksider/Deep-Live-Cam to "deepfake" myself and perform the verification while wearing a fake face. If it works, I'll write about it.

I suspect the endgame of this campaign is to have mandatory ID checks for social media. Police would have access to these upon court orders etc and be able to easily prosecute anyone who posts 'harmful' content online.

  • <tin-foil-hat> ultimately, i think the endgame is to require government ID in order to access internet services in general, a la ender's game. </tin-foil-hat>

  • I'm afraid the endgame is that all this activity, tied to real identities, will be repeatedly leaked, used for blackmail, and exploited by foreign intelligence agencies.

    Followed by governments basically shrugging.

  • Which would kill social media. The cherry-picked tech giant iterations anyway.

    • I don't think it would kill social media, but it'd make it more similar to Chinese social media: essentially impossible to use for protests or criticism of things the government doesn't want criticized.

    • Why? People make social media accounts with their real name and face already. I doubt it would have any effect.

    • It ties real-world ultraviolence to social media. It won't kill social media, just make it materially toxic. IIUC South Korea in the 2000s had exactly this; online dispute stories coming from there were much worse than anything I had heard locally.

  • See e.g. "Ohio social media parental notification act"

    (mind you, ID/age requirements for access to adult content go way, way back in all countries)

  • They already have access to this.

    If you run a social media site, then you have an API that allows government access to your data.

  • You need to ask what Trump would do. The court order would probably be skipped, or come from a friendly judge.

  • Good!

    Why is the Internet any different than say, a porn or liquor store? Why are we so fuckin allergic to verification? I'll tell ya why- money. Don't pretend it's privacy.

    • there are two false equivalences in your argument, as presented in response to GP:

      1. ID checks are not the same as age verification.

      2. a social media website is not the same as a porn website.

      if you take the stance that social media sites should require ID verification, then i would furthermore point out that this is likely to impact any website that has a space for users to add public feedback, even forums and blogs.

    • How about we don't pretend there's only 1 single facet to this issue, no matter which you think it is?

Like many other people here, I'm wondering what we'll end up having to do at work to deal with this. We don't have the resources to put a full-time person on this, and the UK's not a huge market.

For unrelated reasons, we already have to implement geoblocking, and we're also intentionally VPN friendly. I suspect most services are that way, so the easy way out is to add "UK" to the same list as North Korea and Iran.

Anyway, if enough services implement this that way, I'd expect the UK to start repealing laws like this (or to start seeing China-level adoption of VPN services). That limits the blast radius to services actually based in the UK. Those are already dropping like flies, sadly.

I hope the rest of the international tech community applies this sort of pressure. Strength in numbers is about all we have left these days.

  • > I suspect most services are that way

    I don't know the actual numbers, but I gave up using a VPN by default because in my experience they definitely are not.

  • You'll likely end up paying someone else to do it for you.

    • I'm reasonably sure we will not. Dealing with an integration like that means not shipping some other feature to the rest of the planet. The marginal gain of accepting UK users is lower than the marginal gain of increasing addressable market everywhere else.

    • …as will everyone else. The same company. Who will have all that data in one convenient database just waiting to be leaked.

This feels more like spying on everyone than making the internet safe for kids. Big companies and the government are already tracking what we do online. This just seems like a further reduction of our privacy on the internet.

Parents need to be more involved in what their kids do online, just like in real life. Grounding them isn't enough. We wouldn't let them wander into dangerous places, so we shouldn't let them wander online without adult supervision. Also, parents need to prepare for having tough conversations, like what pornography or gambling is.

Online companies need to really work to make their sites safe for everyone. They should act like they own a mall. If they let bad stuff in (like pornography, scams, gambling), it hurts their reputation, and people will leave.

Instead of banning everything, because some people take pleasure in those activities, maybe there should be separate online spaces for adults who want that kind of content, like how cities have specific areas for adult businesses. This way, it would be easier to restrict children's access to some hardcore stuff.

If we all put some effort into figuring out easy and privacy-friendly solutions to safeguard kids, we can rely on simple principles. For example, if you want to sell toys to kids, you shouldn't sell adult toys under the same roof (same domain) or have posters that can affect young minds.

  • > This feels more like spying on everyone than making the internet safe for kids.

    That’s always been the point. “Protecting children online” is the trojan horse against privacy, and apart from a few of us nerds, everyone is very much in favour of these laws. The fight for privacy is pretty much lost against such a weapon.

    • I hear you on the 'Trojan horse,' but I'm still hopeful! We can vote with our money and maybe even crowdfund platforms that truly respect our privacy and time. We already have paid, privacy-friendly options for things like email and messaging; perhaps a Discord alternative isn't too far-fetched!

Of all the terrible, dumb-headed ideas. I would not want my kids scanning their face into who-knows-what third party's service.

I already decline this technology when finance companies want to use it for eg. KYC verification ("Sorry, I don't own a smartphone compatible with your tool. If you want my business you'll have to find another way. Happy to provide a notarized declaration if you'd like" has worked in the past).

  • [flagged]

    • This response is a textbook example of a manipulative false dichotomy that poisons legitimate discourse about child safety online.

      Presenting the only options as either "scan your child's biometric data into opaque systems" or "let your child be groomed and/or get addicted to porn" is intellectually dishonest and deliberately inflammatory. It's a rhetorical trap designed to shame parents with valid privacy concerns into compliance.

      Privacy rights and child protection are not mutually exclusive. Numerous approaches exist that don't require harvesting biometric data from minors, from improved content filtering and educational initiatives to parental controls and account verification methods that don't rely on facial scanning. Corporations are simply implementing the most convenient (for them) solution that technically satisfies regulatory requirements while creating new data streams they can potentially monetize.

      What's actually happening here is deeply troubling: we're normalizing the idea that children must surrender their biometric data as the price of digital participation. This creates permanent digital identifiers that could follow them throughout their lives, with their data stored in systems with questionable security, unclear retention policies, and potential for future misuse.

      Weaponizing the fear of child exploitation to silence legitimate concerns about corporate overreach isn't just manipulative - it's morally reprehensible. Framing opposition to biometric surveillance as being pro-exploitation deliberately poisons the well against anyone who questions these systems.

      We can and must develop approaches that protect children without surrendering their fundamental privacy rights. Pretending these are our only two options isn't just wrong - it actively undermines the nuanced conversation we should be having about both child safety and digital rights.

      1 reply →

Frankly I'm scared by governments and corporations going "papers, please" for people to be allowed to access the Internet. On top of endangering privacy by tying pseudonymous online interactions to real-life ID and biometrics, attempts to block under-18 people from finding information or interacting online will only amplify how society regards them as not having rights. This will isolate people (especially gay and trans teens) living with abusive parents from finding support networks, and prevent them from learning (by talking to friends in different situations) that being beaten or emotionally put down by parents is abusive and traumatizing.

I know all too well that when you grow up you're psychologically wired to assume that the way the parents treated you is normal, and if they harmed you then you deserve to be hurt. I've made friends with and assisted many teens and young adults in unsafe living situations (and talked to people who grew up in fundamentalist religions and cults), and they're dependent on online support networks to recognize and cope with abuse, get advice, and seek help in dangerous situations.

  • To add to this, some people might be left out because companies are not financially incentivised to verify them.

    In Germany, immigrants struggle to open a bank account because the banks require documents that they don't have (and that they can hardly get with a bank account). Russian, Iranian and Syrian citizens have a particularly hard time finding a bank that works for them. The most common video document verification system does not support some Indian passports, among others.

    To banks, leaving these people out is a rational business decision. The same thing will happen to those deemed too risky or too much hassle by the internet's gatekeepers, but at a much bigger scale.

    • What is it about some Indian passports? Do they need to have a biometric chip to work? (just checked, and those were introduced in 2024)

      Banks worldwide regularly refuse service to people who have US citizenship, so I don't think you're far off on that point.

      1 reply →

  • > prevent them from learning (by talking to friends in different situations) that being beaten or emotionally put down by parents is abusive and traumatizing.

    parents didn't know I'm gay, but they did control all flow of information (before social media) by controlling all movements outside school.

    it took me until my thirties to realise how deeply abusive my childhood was. the only hints I had, in hindsight, was the first Christmas at uni, everybody was excited to go home and I couldn't fathom why on earth anybody would want to. I dismissed it as an oddity at the time.

It's interesting how the "features" which many claim IRC is missing turn out to be a huge liability. Adult content is shared via image hosting, video/audio chat, etc. All things IRC lacks.

  • There is definitely a textual privilege in media. You can write things in books that would never be allowed to be depicted in video. Even in Game of Thrones, Ramsay's sadism had to be sanitised a little for live action.

    This is doubly so if your book is historic in some sense. Still find it crazy that Marquis de Sade's stuff is legal.

  • > All things IRC lacks.

    IRC can give you all the features of a normal client, but you've got to create them yourself, which is a dark art that's been squandered by today's gimmicky services.

    Just because it doesn't have a fancy UI to present the media doesn't mean it can't.

    Encode to base64 and post in the channel. Decode it back to the normal format... IRC is excellent for moving large amounts of text as strings.

    You could even stream the movie in base64 and have a client that captures the data stream and decodes.
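    A rough sketch of the encode/chunk/decode idea (the 400-character chunk size is an arbitrary choice to stay under IRC's 512-byte message limit):

```python
import base64

CHUNK = 400  # stay safely under IRC's 512-byte line limit (arbitrary choice)

def encode_for_irc(data: bytes) -> list[str]:
    """Base64-encode arbitrary bytes and split into channel-postable lines."""
    text = base64.b64encode(data).decode("ascii")
    return [text[i:i + CHUNK] for i in range(0, len(text), CHUNK)]

def decode_from_irc(lines: list[str]) -> bytes:
    """Reassemble posted lines and decode back to the original bytes."""
    return base64.b64decode("".join(lines))

payload = b"\x00\x01binary file contents\xff" * 100
assert decode_from_irc(encode_for_irc(payload)) == payload
```

    Worth noting that base64 inflates the data by about a third, so "streaming a movie" this way would be painfully slow in practice.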

    The only thing IRC lacks is a feature to recall conversations from when someone wasn't present. But if you need that, host a bouncer or something.

    I personally enjoy entering a blank slate.

I think regulation could be done better...

Let's assign one or ideally two adults to each underage child, who are aware of the child's real age and can intervene and prevent the child from installing discord (and any other social media) in the first place, or confiscate the equipment if the child breaks the rules. They could also regulate many other things in the child's life, not just social network use.

  • > confiscate the equipment if the child breaks the rules.

    Even you acknowledge this plan is flawed and that the child can break the rules. And it's not that difficult. After all, confiscating the equipment assumes that they know about the equipment and that they can legally seize it. Third parties are involved, and doing what you suggest would land these adults in prison.

    I know you thought you were being smart with your suggestion that maybe parents should be parents, but really you just highlighted your ignorance.

    The goal of these laws is to prevent children from accessing content. If some adults get caught in the crossfire, they don't care.

    Now, I'm not defending these laws or saying anything about them. What I am saying is that your "suggestion" is flawed from the point of view of those proposing these laws.

    • These are not 20-something college students with jobs and rented apartments, doing stuff without their parents knowing.

      These are kids younger than 13; they don't have jobs, they live with their parents, no internet/data plans outside of their parents' control, no nothing.

      The goal of these laws is to get ID checks on social networks for everyone, so the governments know who the "loud ones" (against whatever political cause) are. Using small kids as a reason to do so is a typical modus operandi to achieve that.

      Yes, those "one or two adults" I mentioned should be the parents, and yes, parents can legally confiscate their kids' phones if they're doing something stupid online. They can also check what the kid is doing online.

      If a 12yo kid (or younger) can somehow obtain money and a phone and keep it hidden from their parents, that kid will also be able to avoid such checks by VPN-ing (or using a proxy) to some non-UK country where those checks won't be mandatory. This again is solved by the parents actually parenting... it's kids younger than 13; at that age, parents can and should have total control of their child.

      11 replies →

So, what will be the proper technology to apply here? I have no problem with verification of my age (not the date of birth, just the boolean, >18yo), but I do have a problem with sending any party a picture of my face or my passport.

  • Discord got me to do this about 2 weeks ago (I'm Australian, so they seem to be rolling this out here too). At least for the face scan, the privacy policy said it occurred on-device, so if you believe that, you're not sending anyone images of your face.

    • Fascinating. If it really isn't sending the face images, spoofing the verification could be as simple as returning a boolean to some API.

    • we don't store your face [just the unique biometric metadata weights]. a computer doesn't need a picture to identify you, just store the numbers and you can legally claim you aren't "storing the picture".

  • Maybe someone like apple will make a "verify user looks over 18" neural net model they can run in the secure enclave of iphones, which sends some kind of "age verified by apple" token to websites without disclosing your identity outside your own device?

    Having said that, I bet such a mechanism will prove easy to fake (if only by pointing the phone at grandad), and therefore be disallowed by governments in short order in favour of something that doesn't protect the user as much.

    • Apple lets you add IDs to your wallet in some jurisdictions. I wouldn't be surprised if they eventually introduce a system-wide age verification service and let developers piggyback on it with safe, privacy-preserving assertions.

  • This is a social problem and as such cannot be solved with technology. You would have to make social media so uncool that young people didn't use it. One of the easiest ways of doing this is associating it with old people. Therefore the fastest way to get young people off Discord is to get the geriatric onto Discord, en masse.

    • Underage drinking is a social problem.

      The issue isn't that social media is bad; the issue is that social media has no effective moderation. If an adult is hanging out at the park talking to minors, that's easy to spot and correct. There is strong social pressure to not let that happen.

      The problem is that when moving to chat, not only is the mobile private to the child, there are no safe mechanisms to allow parents to "spot the nonce". Moreover the kid has no real way of knowing they are adults until it's too late.

      It's a difficult problem: doing nothing is going to ruin a generation (or already has), while doing it half-arsed is going to undermine privacy and not solve the problem.

  • OIDC4VCI (OpenID for Verifiable Credential Issuance) [0] is what I think has the most promise.

    My understanding is that an issuer can issue a Credential that asserts the claims (eg, you are over 18) that you make to another entity/website and that entity can verify those claims you present to them (Verifiable Credentials).

    For example, if we can get banks - who already know our full identity - to become Credential Issuers, then we can use bank-provided Credentials (that assert we are over 18) to present to websites and services that require age verification WITHOUT having to give them all of our personal information. As long as the site or service trusts that Issuer.

    [0] https://openid.net/specs/openid-4-verifiable-credential-issu...
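    A toy sketch of that claim/verify split (not the actual OIDC4VCI wire format, which uses JSON Web Signatures; the shared HMAC key here is a stand-in for the issuer's asymmetric key pair so the example stays self-contained):

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"bank-issuer-secret"  # real issuers would use an asymmetric key pair

def issue_credential(over_18: bool) -> dict:
    """Issuer (e.g. a bank): sign a minimal claim, revealing only the boolean."""
    claims = json.dumps({"over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Verifier (the website): check the signature; learn nothing but the claim."""
    expected = hmac.new(ISSUER_KEY, cred["claims"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and json.loads(cred["claims"])["over_18"]
```

    The HMAC version would let any verifier forge credentials, which is exactly why real Verifiable Credentials use the issuer's public key: any site can verify, none can forge.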

  • A variation of PassKeys could work well.

    Especially if it was tightly integrated into the OS so that parents could issue an AgeKey to each of their children which sites would ask for.

How do we fight back against this? I don't want my face scanned on a smartphone to use goods and services. KYC checks for banks are bad enough.

I miss the internet of the early 2000s.

  • This is probably difficult, but if everyone collectively didn't use services that had such egregious requirements, it would likely die quickly. The companies being required to do such things would have to start pushing back against government policies that are killing their business.

    Considering everyone currently and without a second thought lets Apple scan their face just for the convenience of unlocking their phone I think this is a lost cause.

  • I don't think there are any easy answers to the question of how to respond to this but you might consider:

    - voting with your feet

    - contacting your elected representatives

    - contacting media outlets

    - becoming a member or donor of civil liberties campaigns

    - listening to people who don't yet get it and trying to ensure that they can switch to your view without losing face

Relevant news article from yesterday:

https://www.wired.com/story/new-jersey-sues-discord/

> Platkin says there were two catalysts for the investigation. One is personal: A few years ago, a family friend came to Platkin, astonished that his 10-year-old son was able to sign up for Discord, despite the platform forbidding children under 13 from registering.

> The second was the mass-shooting in Buffalo, in neighboring New York. The perpetrator used Discord as his personal diary in the lead-up to the attack.

In other words, this is yet another attack on privacy in the name of "protecting the children".

This is how you lose your comfortable market monopoly like Skype did. Recall that Skype had better P2P tech than Discord did and would still be the market leader if MS had chosen to update anything at all besides the logo bi-yearly.

Regulators will never comprehend the internet. They are making it look like they have no idea that on the internet you can: move to another country without a visa in 2 minutes; change your face, voice, and fingerprints to whatever you like; get any passport or document you want to mock any KYC or impersonate anyone without a trace, all within the $10 range.

Sure, companies have no option but to implement funny policies like these, and I'm sure any kid is much smarter than the government, so they will feel good circumventing it.

Maybe the start of a bigger shift to another platform. I'd wager a large portion of the Discord user-base is underage, and they've got nothing but time.

why wouldn't an identity/age verification scheme that blinds both sides work?

e.g. a site wants to have some proof of identity. it generates a token and sends the user with it to a government service. the service verifies the user's identity, signs the token and sends the user back.

now the site knows that the government service has verified the identity (and relevant characteristics, like age threshold), but doesn't know the identity. the government service obviously knows the user but doesn't know the online account tied to the identity. this can be further separated by using a middleman authentication provider, so that even the site identity itself doesn't reach the government.

am i missing something obvious why that wouldn't work?
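The flow described above can be sketched with textbook RSA (toy hardcoded primes, illustration only; a real system would need proper key sizes and padding, and ideally blind signatures so the government can't link the token at all):

```python
import hashlib

# Toy RSA key for the government service (insecure, illustration only).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def token_hash(token: str) -> int:
    """Map the site's opaque token to an integer mod n."""
    return int.from_bytes(hashlib.sha256(token.encode()).digest(), "big") % n

def gov_sign(token: str) -> int:
    """Government: after checking the user's ID, sign the site's opaque
    token. The token itself tells the government nothing about the site."""
    return pow(token_hash(token), d, n)

def site_verify(token: str, sig: int) -> bool:
    """Site: verify with the public key (n, e), learning only 'age confirmed'."""
    return pow(sig, e, n) == token_hash(token)
```

One known gap, raised elsewhere in the thread: nothing in this scheme stops a verified adult from completing the round-trip on a minor's behalf.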

This will definitely just apply to social media and the situation won't be abused by other companies even if they have no legal requirement, absolutely not, no sir.

It's worth noting that Matt Navarra, the sole source of "this is part of a bigger shift", is an ex-member of the UK government who worked in the PM's office and worked for the BBC.

This story is a tempest in a teacup. The administration found someone to spread this nonsense so everyone later goes "well that was inevitable, the BBC predicted it would be."

Yeah, and bank robbers can predict that a bank is going to have less cash after a certain day.

This obsession the British have with kids online is so tiresome. You want to stop child sexual assault? Maybe do something about your royalty flying to island getaways organized by a human trafficker and ultra-high-end pimp for underage kids? Or do something about your clergy diddling kids?

Maybe the reason the UK government thinks this is such a big issue is because these legislators and officials are so surrounded by people who do it...because politicians are right there next to clergy in terms of this stuff.

I am getting sick and tired of the thinly veiled excuse of "we need to strip away more of your privacy in order to protect the children". We all know they are doing it because they want to surveil/track you more easily.

and for those that think they are actually doing this to protect the children and are concerned about what your children see online, this might sound a bit harsh, but why don't you actually parent? Stop giving your kids unlimited access to tablets/computers etc. back in my day there was the option of having a single computer for the child in a public room that could not be moved. nowadays you can very easily create whitelists of allowed sites, even as a layman.

i understand it is a bit harder nowadays because more parents are both working to support the family, but i'd rather not lose what little privacy we have left as a society because it requires more work for you to parent

  • I've made this suggestion in past discussions on this topic.

    Users should be anonymous.

    Sites should verify that user is over 18 using a government web service.

  • The fact that you think that anything you suggested would prevent or hinder a child from seeing things you don't want them to see online, or that it would affect a child's ability to be affected by what is online is indicative of the bigger problem. To put it into terms you might understand, you are storing your passwords in plaintext and the traffic isn't encrypted.

    Basically, you're ignorant.

    This isn't to say that the laws that the majority put into place are good. I'm not speaking on that. You are, in this situation, that layman, who cannot solve the problem you are claiming you want to solve.

    • you could never solve the problem of completely preventing a child (or anyone, for that matter) from doing something if they are determined enough, and you are naive for thinking so. should we remove all forms of encryption because pedophiles/terrorists use it?

      if a child is determined to see naughty things online, they'll just find a website that doesn't care about facial recognition laws, while our privacy is still stripped away even more. so we lose more privacy as a society and kids still see what they desire, it just takes them a few minutes, at most hours, longer.

      once again: be a parent and pay attention to what your child is doing online, or just talk to them about it. you chose to be a parent and that does require work to do a good job at it

    • > To put it into terms you might understand, you are storing your passwords in plaintext and the traffic isn't encrypted.

      What?

I'm on several UK-based soccer message boards and none of this seems to be required there. The forums are running on Xenforo or PhpBB, self-hosted by the admin. Some of those forums have thousands of user accounts registered.

Is Discord considered to be different as it's a centralized aggregator platform like Reddit, vs a standalone thing like a message board?

  • I haven't read the article. They aren't coming for Evoweb next are they? The UK is looking screwed man... something bad is cooking there...

Identity verification remains unsolved and likely will remain that way. Any attempts at improvement are authoritarian, and the status quo leaves massive room for circumvention.

Personally, I grew up in an era before there was any expectation of validation, and enjoyed the anonymity of message boards and forums. But when people are posting blatantly illegal activity online, I can see the appeal for more validation. Just makes me sad.

  • Which makes one wonder how much of the illegal activities are by people who really are interested in engaging in that illegal activity and how much of it is from those who see it as a means to destroy anonymity online.

Discord is a walled garden. Sucks how popular it is for communities which used to be free and indexable on the web.

A book recommendation on the topic:

> This is the first book to examine the growth and phenomenon of a securitized and criminalized compliance society which relies increasingly on intelligence-led and predictive technologies to control future risks, crimes, and security threats. It articulates the emergence of a ‘compliance-industrial complex’ that synthesizes regulatory capitalism and surveillance capitalism to impose new regimes of power and control, as well as new forms of subjectivity subservient to the ‘operating system’ of a pre-crime society.

https://www.amazon.com/Compliance-Industrial-Complex-Operati...

The U.S., at least, needs a national ID. That, and a verification system for businesses to use, would solve so many of these issues.

Now we have a good use-case for diffusion-based image generation: bypass these insanely privacy-invasive requirements.

This is going to do a real number on YouTube drama documentary channels.

Where are you gonna get your content if the lolcows can't creep on minors on Discord anymore?

I mean, in theory, they could find ways to circumvent it, but if they were that smart, they wouldn't be the subject of YouTube drama documentaries.

Nope. There are better ways to check you're over 18; credit cards have been mentioned above. If a platform I'm using attempts this with no other option, I'll delete my account and data on that platform.
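The credit-card check mentioned upthread works by refunding a random micro-amount and asking the user to report it back. A toy sketch of that flow, with the payment-gateway call left as a hypothetical stub (all names here are illustrative, not any real API):

```python
import secrets

def issue_challenge():
    """Refund a random amount of 1-99 cents to the user's credit card.

    The user must report the exact amount within 24 hours; a charge for
    the same amount claws it back afterwards. A real implementation
    would call a payment gateway here (hypothetical stub omitted).
    """
    amount_cents = secrets.randbelow(99) + 1  # 1..99, unguessable
    return amount_cents

def verify(reported_cents, actual_cents):
    """True if the user correctly reported the refunded amount.

    A single guess succeeds ~1% of the time, so attempts should be
    rate-limited server-side.
    """
    return reported_cents == actual_cents
```

Note the property the commenter is pointing at: the site learns only "this person controls a credit card" (a rough adulthood proxy), not a face scan or a government identity.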

VPNs are really a requirement for UK residents now.

They will do just enough so that they comply with the law while kids will be able to easily bypass it.

Where there's a will, there's a way, and a teenager looking for porn has plenty of willpower.

Maybe now open-source projects will get off of Discord for their official chat/support?

discord is shit, poorly designed software, with all the most obnoxious poor security decisions (like requiring phone calls), with poor political decisions on top as well as spying. probably one of the worst pieces of software to ever exist. all it has is momentum. it's like pop music where a million people make bands and one wins the fame lottery.

it's primarily a windows program and they can't even make a proper windows gui but embed a website, so clicking on anything is like a link instead of focusing into it. for instance if you middle click someone's name it opens it in a browser. fuck off. pressing alt+f4 closes discord instead of sending it to the tray (despite being a tray program). it's always updating something and then it just says "logging in" instead of saying what it's doing. it gets stuck indefinitely if you log in on a slow connection or you unplug lan while it's logging in or doing whatever it's doing at any given moment. absolutely the most frustrating crap to use. it has a billion options for stupid "hardcore" gamers (i am, too, but i don't need it) with special needs while not being a basic quality application that conforms to any UI standard.

they openly spy on you, not even trying to hide it.

instead of real software, it's a stupid fucking social media "community", so you can't just use it as a MECHANISM NOT POLICY program; instead, every time you do something like log into a different account, you have to check whether this is morally correct or will somehow harm their "community". like say i want to work on my blockchain project, whose community is dumb enough to use discord as their main communication platform. i obviously then would want one account for that, then another, completely separate account for playing games (often during work), perhaps with the same phone number to make it logistically easier, which SHOULDN'T EVEN BE A THING, this is the internet. i can't just log into these simultaneously; i have to go check what their policy is on that. literally, my first thought is that like typical incompetent american software devs, they will think i'm trying to scam people or commit some other kind of "abuse". and of course, they appear to have conceded to partially implement this "feature" (by undoing their nonsense about forensically attempting to forbid it)

They’re using the databases to go after illegal immigrants right now. Soon it’ll be using the porn databases to go after Gay people. They’re trying to use the healthcare databases to go after Trans people. All this verification is nothing but a way to commit genocide against minorities. Porn is so far down on the list of harmful things. There’s no pearl clutching over alcohol and other drugs like Americans have with porn. Nation of pansies.

I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?

I haven't researched the topic of social media's effect on young people, but the common sentiment I encounter is that it's generally harmful, or at least capable of harm in a way that is difficult to isolate and manage as a parent.

The people closest to this issue, that is parents, school faculty, and those who study the psychology and health of children/teens, seem to be the most alarmed about the effects of social media.

If that's true, I can understand the need to, as a society, agree we would like to implement some barrier between kids/teens and the social media companies. How that is practically done seems to be the challenge. Clicking a box that says, in effect, "I totally promise I am old enough" is completely useless for anything other than a thin legal shield.

  • >I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?

    Yes. The state has far, far too much involvement in everybody's lives.

    • This is a great stance to have if consequences have zero value.

      Every time we shrug and say "let the parents decide," we gamble with the most vulnerable: the kids who don’t yet know how to refuse a cigarette, who don’t yet grasp the weight of a loaded weapon, who don’t yet understand that porn isn’t a harmless curiosity. We gamble with the soul of childhood—and when we lose, those children don’t get a second chance. They leave behind empty chairs at dinner tables, empty beds in houses that echo with what might have been. That’s the true cost of unfettered "parental freedom," and it’s a price that's easy to pay with someone else's life. But hey, Fuck those kids, right?

      5 replies →

  • The difference is that requiring ID for those activities doesn't generally drastically erode the privacy of other people.

    Instead of destroying the concept of privacy and anonymity on the Internet... how about we just stop these companies from being as harmful as they are, regardless of your age?

  • > I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?

    My gut feel here mostly has to do with how I view the activity overall. Smoking I see as a social ill that both adults and children would be better off without, so I don't particularly mind an ID check that inconveniences adults, and that can be opted-out from by simply not smoking. (Social media I see as pretty akin to smoking.)

    Inconveniencing adults with ID checks is probably not actually a good way to create incentives though.

    (Driving is a special case due to negative externalities and danger you cause to others.)

    • > My gut feel here mostly has to do with how I view the activity overall. Smoking I see as a social ill that both adults and children would be better off without, so I don't particularly mind an ID check that inconveniences adults, and that can be opted-out from by simply not smoking. (Social media I see as pretty akin to smoking.)

      The big difference for me is, the person looking at my ID at the gas station isn't storing all the data on it in some database, which may or may not be properly secured.

      If age verification can be done ephemerally, then I think it's largely a non-issue. But of course it won't, you'll have to submit some combo of personal info + a photo or face scan, and that information will be stored by any number of third parties, probably permanently, only to end up in the next data breach.

      There's also an issue of anonymity, which is increasingly under attack on the web. Even in the gas station example, while I'm not truly anonymous when I buy alcohol, the gas station attendant likely isn't going to remember me or my name, and it's certainly not being stored alongside an entire customer profile.

      For services on the web, we need a similar level of privacy with the age verification, otherwise it's not just age verification it's identity verification as well (and by extension, the tying of all of your activity on that service directly to you) which I do have a big problem with.

      If we want age verification online, we have to have a way to do it ephemerally and pseudo-anonymously.
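The ephemeral, pseudo-anonymous scheme the comment asks for could, in spirit, be a short-lived signed token that attests "over 18" without carrying identity. A toy sketch, using an HMAC as a stand-in for a real signature (everything here is hypothetical; a real design would use public-key or blind signatures so the site never holds the issuer's key, and so the issuer can't link a token back to the check that produced it):

```python
import hashlib
import hmac
import secrets
import time

# Held by the trusted age-verification issuer (toy stand-in for a signing key).
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(ttl_seconds=600):
    """Issued once the issuer has verified age by whatever means.

    The token carries only an expiry and a random nonce: no name, no
    document number, nothing a site could tie to a person.
    """
    payload = f"over18|exp={int(time.time()) + ttl_seconds}|nonce={secrets.token_hex(8)}"
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def site_accepts(token, verify_key=ISSUER_KEY):
    """Site-side check: valid tag and not expired, nothing else learned."""
    payload, _, tag = token.rpartition("|")
    expected = hmac.new(verify_key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    exp = int(payload.split("|")[1].removeprefix("exp="))
    return time.time() < exp
```

The HMAC version has an obvious flaw the comment anticipates: the site needs the issuer's key to verify, so in practice you'd want signatures verifiable with a public key, and blind signatures (or a zero-knowledge credential) if even the issuer shouldn't be able to correlate tokens with verification sessions.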

  • > I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents.

    No you don't. The bulk of the comments at this point in time don't mention things being left to parents at all.

  • Clicking a box gives a person a chance to decide whether they want to enter a website or not, without getting exposed to it immediately. It's not useless.

    It also provides no useful information to the website operator, which is good. If the info were useful, it would be logged.

    If it is logged, well, I've seen what a morally derailed high-tech state will do with any and all data it can get hold of. They'll put it all in a giant AI lottery machine to generate and "justify" targets for their genocide, to kill and burn entire families at once. It's happening now elsewhere in the world.

    What should be scary to everyone is that it's being justified, or at best ignored, by supposedly morally "normal" western states (like mine) which are not yet directly engaged in such behavior.

    I do not trust "elites" who are able to ignore or justify this being done elsewhere with making any of my behavioral data traceable directly to me through forced provision of identity to services that don't need it to function.

  • > I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?

    All of the things on your list are primarily enforced by parents already.

    This law is regulatory capture that's going to strengthen the monopolies of the exact social media sites that you allude to. It makes it harder for smaller, focused sites to exist. Instead the only option will be sites with algorithmic feeds that currently push right-wing nazi propaganda, anti-vaxxers, flat earthers, nihilist school shooting clubs for teenagers, or whatever fresh hell the internet came up with this morning.

    If you think age verification is going to fix these problems on the big sites, I suggest watching YouTube Kids. Actually, don't. I wouldn't wish that trauma on anyone. Seriously.

  • > To those presenting such arguments, do you think that applies to other activities as well?

    You’re acting like it’s not normal for parents to decide which activities a child can do, cannot do, and must do, and to make these decisions with appropriate ages in mind. I tend to lean towards allowing parents a long leash in their own home and other private places but to regulate behavior in schools and public places.