
Comment by Aurornis

1 month ago

Many will cheer for any case that hurts Meta without reading the details, but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption:

> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.

The New York case has explicitly gone after their support of end-to-end encryption as a target: https://www.reuters.com/legal/government/meta-executive-warn...

The correct nuance here is...

* Classifying accounts as child accounts (moderated by a parent)

* Allowing account moderators to review content in the account that is moderated (including assigning other moderation tools of choice)

In all cases, transparency and enabling consumer choice should be the core focus.

Additionally: by default, treat everyone online as an adult. Parents who allow their kids online without supervision, and without some setting indicating that the user agent is operated by a child, intend to allow their children to interact with strangers. This tends to work out better in more controlled and limited circumstances where the adults involved have the resources to provide suitable supervision.

At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.

  • I think this is the way. Not control, but just making it simpler for parents to handle their children’s devices. You don’t have to make everyone share their age; you just make it so that parents can choose, in a simpler way, what their children should be able to access. Make it easy to do right; don’t add more control. It’s kind of like the old anti-piracy copy protections: the pirates always cracked them, and in the end the one who got to sit there figuring out what the word in the manual is was the user who actually paid for the game. That made things worse for the ones who paid and better for the cracked version. So, make it simple.

    • Children are not just the responsibility of parents; it is the job of society, through government, to protect and support them.

      So, no, we can't just tell parents to deal with it.

      10 replies →

  • I think getting the age thing correct is key to getting parental classification to work properly (I think platforms now just ask for a birth date, which is lame), e.g.:

    > Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO

    Once you get the classification correct (and AI cannot do this; it only works via community ombudsmen/age verifiers, in a privacy-first way*), the app stores can easily tell the app devs which accounts are sensitive, and filtering should be much more effective.

    *Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass this and install apps on devices that their parents hand to them. There will always be black-market devices with these apps, but there are ways of keeping those to a minimum with existing tech.

    • > only via community ombudsman/age verifiers

      Why do you need any third parties whatsoever? Just have the parents do it. They configure a setting on the kid's device, which the device uses to determine what content to display. All you need from the app/service is a rating for the content. No third party should ever have to know anything about the user, because the user's device knows that, and the device knows it because the parents do.
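      The parent-configured approach above can be sketched in a few lines (the rating labels and the `visible` helper are hypothetical, purely for illustration):

```python
# Hypothetical sketch: the parent sets a maximum content rating on the
# child's device, and the device filters locally using whatever rating
# the service attaches to each item. No third party learns the user's age.
RATING_ORDER = {"all-ages": 0, "teen": 1, "adult": 2}

def visible(items, device_max_rating):
    """Return only items at or below the device's parent-set rating."""
    limit = RATING_ORDER[device_max_rating]
    return [item for item in items if RATING_ORDER[item["rating"]] <= limit]

feed = [
    {"title": "Cartoon clip", "rating": "all-ages"},
    {"title": "Action trailer", "rating": "teen"},
    {"title": "Horror movie", "rating": "adult"},
]

# A device configured as "teen" drops the adult item on the client side.
print([item["title"] for item in visible(feed, "teen")])  # ['Cartoon clip', 'Action trailer']
```

      The point of the sketch is that the only thing the service has to publish is a rating per item; the filtering decision never leaves the device.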

    • This all depends on fantasy tech and/or totalitarian control of tech.

      Who verifies that the person verifying the child's age is actually authorised to do that? Who verifies that verification? And so on up. This needs a chain of trust that can only end up at government. And that chain of trust will then be open to being abused by shitty politicians.

      What mechanism in (e.g.) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?

      What you propose here is the death of open computing. And I personally believe that we would be much better off as a species if we kept open computing and just taught our kids how to handle social media better.

      5 replies →

  • It’s very hard to control kids’ internet access. Impossible, really. Even if you do it fine at home, once they go to school it’s whatever policies the school has. Most require laptops and provide internet access.

    • > it’s whatever policies the school has.

      so the school takes on that responsibility, as deputized by the parents.

      Kids don't get unfettered access to the streets while at school. They can't take their bikes and ride out at will. What makes the internet and devices any different? The devices provided by the school should be lockdown-able, and kids should not be provided their own device unless there's a parental lock (which is enabled during school hours, and is similarly locked down).

      4 replies →

    • I feel uncomfortable about the idea of controlling children, even my own. Certainly there is a requirement to protect children from others, but I feel like putting in guard rails to protect children from themselves only leads to making things taboo and, as a result, more interesting.

  • > Classifying accounts as child accounts (moderated by a parent)

    Notice also that even if you do this, you still don't need the service to be able to decrypt the content, only the parent.

    This could even be generically useful, e.g. you have a messenger used by business and then the messages can be read by the client company's administrator/manager but not the messaging company's.

  • I don’t agree we should treat everyone as an adult by default online. We wouldn’t do that in any other circumstances.

    • And the only reason it is permissible to presumptively treat people as underage until proven otherwise in the physical world is that there isn't a constellation of intermediaries collecting all your habits and preferences when you buy porno magazines or alcohol in person.

      Why is the answer people seem to arrive at being "mandatory collection of blackmail material that will ruin careers and relationships" when it comes to the Internet?

      3 replies →

  • > Classifying accounts as child accounts

    It's ok to drive Dad's truck unless he catches you and tells you no.

    • Unfair presentation. What they suggested was more akin to, "Assume someone with keys is an adult, and let them start the truck."

      Dad should either know his children would never drive the truck without permission, or keep his keys as safe as his wallet (and if he can't trust his kids with keys, you bet his wallet needs protection).

  • That doesn't work, unless the system knows everyone's family relationships.

    Not guesses. Not is told about and takes on trust. Knows.

    There's nothing to stop a kid creating a fake adult account and using it as an adult, perhaps creating their own kid account for "official" use.

    Ultimately this is an unsolvable problem without a single source of truth for verified ID and user age.

    The only responsible way to do that is to create a global "ID escrow" agency, where ID details are private and aren't available to governments or corporations without a court order, but the agency can provide basic age checks and other privacy services of a limited nature.

    Good luck with that idea in this culture.

    Meanwhile we have the opposite - real ID is known to governments and corporations, personal habits and beliefs of all kinds can be tracked, there is zero expectation of privacy, and kids still aren't protected.
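    The "ID escrow" idea can be sketched as an agency that answers only minimal yes/no attribute queries (all class and method names here are hypothetical):

```python
# Toy sketch of an "ID escrow" agency: it holds the ID record privately
# and answers only narrow attribute queries ("is this user over N?"),
# never releasing the underlying record without a court order.
from datetime import date

class IDEscrow:
    def __init__(self):
        self._records = {}  # user_id -> date of birth, held privately

    def enroll(self, user_id: str, dob: date) -> None:
        self._records[user_id] = dob

    def is_over(self, user_id: str, years: int, today: date) -> bool:
        # Answers the minimal question; the DOB itself is never disclosed.
        dob = self._records[user_id]
        age = today.year - dob.year - (
            (today.month, today.day) < (dob.month, dob.day)
        )
        return age >= years

escrow = IDEscrow()
escrow.enroll("alice", date(2012, 5, 1))
print(escrow.is_over("alice", 18, date(2026, 2, 1)))  # False
```

    A relying service only ever sees the boolean answer, which is the "basic age checks ... of a limited nature" the comment describes.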

I’m actually okay with not letting underage people use E2E. I’m not okay with blocking everyone. I have 2 kids.

  • I'm not comfortable with the idea that children's private messages would be exposed to thousands of social media workers and government employees.

  • I understand the concern but then to make this available for adults you now have to provide proof of age to companies, which opens up another can of privacy worms.

    • Theoretically we don't actually need proof of age. Websites need to know when the user is attempting to create an account or log in from a child-locked device. Parents need to make sure their kids only have child-locked devices. Vendors need to make sure they don't sell unlocked devices to kids.

      20 replies →
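      The "no proof of age" flow above could be as simple as the service checking a device-declared flag (the `Device-Child-Locked` header name is made up for this sketch, not a real standard):

```python
# Hypothetical sketch: a child-locked device declares itself in a request
# header, and the service blocks account creation on that signal alone.
# No proof of age or identity ever leaves the device.
def handle_signup(headers: dict) -> str:
    if headers.get("Device-Child-Locked") == "1":
        return "denied: sign-ups are blocked on child-locked devices"
    return "ok: account created"

print(handle_signup({"Device-Child-Locked": "1"}))
print(handle_signup({}))
```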

  • In a way, this is like saying that one trusts total strangers in some random large tech company and total strangers in government agencies to read and/or manipulate conversations that kids have. This also paves the way to disallow E2EE for other classes of people based on arbitrary criteria. I don’t believe this is good for society overall.

    • The reason we are having this discussion is that the private route only worked up to a point.

      Firms have a fiduciary duty to shareholders and profit.

      On the other hand, you ultimately decide the rules and goals that govern government organizations, and they do not have a profit-maximization target.

      They aren’t the same tool, and they work in different situations.

      The E2EE slippery slope is a different challenge, and for that I have no thoughts.

  • The problem is all these ‘for the children’ arguments contain collateral damage.

    • Well, the problem is that the “don’t do it” arguments have children as the collateral damage.

      We are at a point where we are picking and choosing collateral damage targets.

  • You just need to provide the government with your name and address and the name and address of the counter party every time you send an encrypted message.

    If you don't support this you're obviously a pedo nazi terrorist.

    • Meta is one of the worst offenders here. They are actively lobbying at least the US Congress for laws that require age verification at the hardware/os level.

  • There is no reason kids should use so-called smart devices, except making certain companies richer. Kids have had healthy development without such crap for thousands of years. We don't discuss what percentage of alcohol should be allowed in beer and wine for kids.

Centralized organizations with proprietary software can never offer meaningful end-to-end encryption, because they can just ship an app update to disable or backdoor it at any time.

It is better for them to be forced to turn off the security theater, so that people who need actual privacy can research alternatives.

  • Well, name an example of a thing that can never change, then.

    "research alternatives" meaning what exactly? You think open source is somehow not susceptible to the same issue, plus all of the malicious updates?

    • Security-focused FOSS does signed commits, signed reviews, full-source bootstrapping, and reproducible builds.

      Proprietary software solutions are unable to come close to that level of accountability.

      Not all published source code is secure but all secure software has published source code.
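      The reproducible-builds point can be illustrated with a toy digest comparison (the build bytes below are stand-ins; a real check compares artifacts compiled independently from the same signed source):

```python
# Toy illustration of the reproducible-build check: two independent
# builds of the same source must be byte-identical, so comparing
# digests exposes tampering or non-determinism in the shipped binary.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

build_a = b"\x7fELF...release-1.0"  # builder A's output (stand-in bytes)
build_b = b"\x7fELF...release-1.0"  # builder B's output (stand-in bytes)

if digest(build_a) == digest(build_b):
    print("reproducible: digests match")
else:
    print("MISMATCH: tampered or non-deterministic build")
```

      This is exactly the accountability a closed-source vendor cannot offer: without published source, there is nothing independent to rebuild and compare against.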

This is the core issue.

We know that this isn't really going to reduce harm for children, we know Meta is not seriously going to suffer or change, and we know this is going to be used as a cudgel to beat down privacy and increase surveillance.

  • Why is it so important that kids have access to the internet anyway that we're willing to sacrifice both our privacy and freedom of speech rights for it when we already know it's damaging their mental health?

    We don't need all this privacy invasion if we just didn't give kids a smartphone with a data plan.

Rock, meet hard place?

Harm to kids is actually happening, and this is always going to be a hot button topic.

E2E is critical for our current ability to communicate online, but will be a lower priority when pitted against child safety.

Fighting the good fight is one thing, fighting for the sake of it, without a plan that addresses the tactical reality is another altogether.

Personally, I think E2E will be defended, but it’s becoming a lightning rod for attention. As if removing encryption will solve the emerging issues.

I suspect providing alternatives to champion, such as privacy-preserving ways to verify age, will force a conversation on why E2E needs to go.

> Many will cheer for any case that hurts Meta

Absolutely. Particularly where they've been found to be guilty.

> but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption

Why _social media_ companies are backtracking. I'm extremely nonplussed by this outcome.

> concerns that allowing teens

Yes, because that's what we all had in mind when considering the victims and perpetrators of these crimes.

The lawyers using the finding badly internally doesn’t mean the finding was fundamentally unsound and/or won’t ultimately be a positive thing.

This is a good thing for “social” media. If you use any social media app (especially those owned by Meta) you should assume that absolutely everything you do is for full public consumption. Maybe these changes will make everyone stop thinking that anything is private when using “social” media apps.

It's illegal to hand a minor harmful material. Meta did exactly that. I support people's rights to make and buy sports cars, but it is illegal to hand the keys to a minor and leave them unsupervised.

The Clipper chip is coming back.

  • How is the Clipper chip different from what online platforms claim to have: a curated kids-only section?

    • In the mid-90s the US government proposed that Clipper be used as the universal encryption standard for secure electronic communications in the civilian realm, all other cryptosystems being presumably forbidden. It was based on the idea of key escrow: that all Clipper keys be held in an archive and law enforcement could recover a copy of the encryption key for any given Clipper chip upon providing legitimate authorization to intercept communications. However, the Skipjack protocol used by the chip was buggy and insecure, and consumer CPUs became powerful enough that military-grade encryption was practical in software, rendering Clipper moot. A series of First Amendment rulings protected the proliferation of such software cryptosystems under the rubric that computer program code was protected speech.

      The Meta ruling gives the government an effective stick, First Amendment notwithstanding: if you facilitate communication that the government cannot break into, and someone abuses a child with help from your secure platform, you could be liable for contributing to the abuse of that child. A safe harbor from liability will be provided—by adopting key escrow based encryption (if you support encryption at all). This does not interfere with protected speech about cryptosystems, but it makes using cryptosystems difficult in practice due to the chilling effects.

      1 reply →
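      The key-escrow model described above can be reduced to a toy sketch (the XOR cipher here is deliberately trivial and NOT secure; the real Clipper chip used the Skipjack algorithm):

```python
# Toy sketch of key escrow: each device's session key is also deposited
# with an escrow agent, so an "authorized" request can recover the key
# and decrypt the traffic. Illustration only -- the cipher is NOT secure.
import secrets

escrow = {}  # device_id -> session key held by the escrow agent

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Trivial XOR keystream, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send(device_id: str, plaintext: bytes) -> bytes:
    key = secrets.token_bytes(16)
    escrow[device_id] = key  # the escrow-deposit step
    return xor_stream(key, plaintext)

def lawful_intercept(device_id: str, ciphertext: bytes) -> bytes:
    # With "legitimate authorization", the escrowed key is released.
    return xor_stream(escrow[device_id], ciphertext)

ct = send("chip-42", b"meet at noon")
print(lawful_intercept("chip-42", ct))  # b'meet at noon'
```

      The chilling effect described above comes from the deposit step: the two ends no longer hold the only copies of the key.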

As a platform operator, I think end-to-end encryption does no good in free products. It just gets you blamed for liability that you couldn’t have foreseen or mitigated.

> (my emphasis) Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.

WhatsApp and Messenger are still fine, then.

No. Meta is backtracking because the business case for end-to-end encryption is gone. They will willingly give the Trump administration whatever it wants, because they are not in the business of fighting authoritarian governments; they are in the virtue-signalling business, which only operates when governments are constrained by the rule of law.

The business case was to be able to say “we don’t know”. That case is gone.

Is it illegal or is it just illegal on general purpose platforms whose focus isn't extreme security?

We all know Meta can still read E2EE chats (otherwise they wouldn't do it) and they're using E2EE as an excuse to avoid liability for the things their platform encourages. Contrast this with something like Signal where the entire point is to be secure.

  • > We all know Meta can still read E2EE chats

    That can't be true, otherwise in what sense is it E2EE?

    • In the sense that calling it E2EE gives people a warm fuzzy feeling and makes people send more sensitive information over the platform.

      Has anyone actually audited it?

      7 replies →

    • Well, I've seen services describe having "E2EE" where one end is your computer and the other end is their server, so...

  • The first two E's in E2EE stand for end. From one end to the other. So no, Meta can't. Or put another way... if they can read those messages, then it's not E2EE.