Comment by michaelmior

16 hours ago

Yes, but this leaves reporting by a minor as the only way to identify this behavior. I'm not saying I trust TikTok to do only good things with access to DMs, but I think it's fair to say that in this scenario a platform has a better opportunity to protect minors if messages aren't encrypted.

I'm not saying no E2E messaging apps should exist, but maybe they don't need to exist for minors on social media apps. However, an alternative could be allowing the encryption key to be shared with a parent, so that someone has the ability to monitor messages.
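As a rough illustration of the parent-key idea (not any platform's actual design), each message could be encrypted once under a fresh per-message key, with that key then wrapped separately for the recipient and for the parent, so either can decrypt. A minimal sketch using the Python `cryptography` package, with symmetric Fernet keys standing in for each party's real asymmetric keys:

```python
from cryptography.fernet import Fernet

def send_with_parent_escrow(plaintext: bytes,
                            recipient_key: bytes,
                            parent_key: bytes) -> dict:
    """Encrypt once with a fresh per-message key, then wrap that key
    for both the recipient and the parent, so either can decrypt."""
    msg_key = Fernet.generate_key()
    return {
        "ciphertext": Fernet(msg_key).encrypt(plaintext),
        "key_for_recipient": Fernet(recipient_key).encrypt(msg_key),
        "key_for_parent": Fernet(parent_key).encrypt(msg_key),
    }

def read_message(envelope: dict, my_key: bytes, wrapped_field: str) -> bytes:
    """Unwrap the per-message key with my_key, then decrypt the message."""
    msg_key = Fernet(my_key).decrypt(envelope[wrapped_field])
    return Fernet(msg_key).decrypt(envelope["ciphertext"])
```

A real deployment would wrap the message key to public keys (as multi-device E2EE schemes already do for each device), but the structure is the same: the platform never holds a decryption key, only the parent does.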

> I think it's a fair argument in this scenario to say that a platform has a better opportunity to protect minors if messages aren't encrypted

Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant? People are wary of this sort of thing not because they think law enforcement is just as effective when it is constrained; they accept the trade-off because how easily crimes can be prosecuted is only one dimension of safety.

> However, an alternative could be allowing the sharing of the encryption key with a parent

Right, but this is worlds apart from "sharing the encryption key with a private company", is it not?

  • > Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant?

    This is a false equivalency. I don't have to use TikTok DMs if I want E2EE. I don't have a choice about laws that allow the police to violate my rights. I'm not claiming that all E2EE apps should be banned.

    > Right, but this is worlds apart from "sharing the encryption key with a private company", is it not?

    Exactly why I suggested that as a possible alternative.

  • > Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant?

    Police can access your home with a warrant.

    Police cannot access your E2EE DMs with a warrant.

    • Not answering my question!

      > Police cannot access your E2EE DMs with a warrant.

They can and do, regularly. What they can't do is prevent you from deleting your DMs if you know you're under investigation and likely to be caught. But refusing to give up encryption keys, or presenting a suspiciously empty chat history in response to a valid warrant, is very good evidence of a crime in itself.

      They also can't prevent you from flushing drugs down the toilet, but somehow people are still convicted for drug-related crimes all the time. So - yes, obviously, the police could prosecute more crimes if we gave up this protection. That's how limitations on police power work.

    • And they shouldn't be able to. Police accessing DMs is more like "listening to every conversation you ever had in your house (and outside)" than "entering your house".

    • >Police cannot access your E2EE DMs with a warrant.

Well, they kind of can, if they seize your phone or another device that holds a valid access token.

      I think it's kind of analogous to the police getting at one's safe. You might have removed the contents before they got there but that's your prerogative.

      I think this results in acceptable tradeoffs.

  • Yes, that is a fair argument and most countries allow the use of surveillance cameras in public for this reason.

Are you suggesting all messaged photos should be scanned, and potentially viewed by humans, in case it depicts a nude minor? Because no matter how you do that, that would result in false positives, and either unfair auto-bans and erroneous reports to law enforcement (so no human views the images), or human employees viewing other adults' consensual nudes that were meant to be private. Or it would result in adult employees viewing nudes sent from one minor to another minor, which would also be a major breach of those minors' privacy.

There is a program whereby police can generate hashes based on CSAM images, and then those hashes can be automatically compared against the hashes of uploaded photos on websites, so as to identify known CSAM images without any investigator having to actually view the CSAM and further infringe on the victim's privacy. But that only works vs. already known images, and can be done automatically whenever an image is uploaded, prior to encryption. The encryption doesn't prevent it.
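The matching step described above can be sketched in a few lines. Note that real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; the plain SHA-256 below only matches byte-identical files and is purely illustrative:

```python
import hashlib

def is_known_image(image_bytes: bytes, known_hashes: set[str]) -> bool:
    """Check an upload against a set of law-enforcement-supplied hashes.

    This runs on the plaintext at upload time, before the message is
    encrypted, so E2EE on the transport doesn't interfere with it.
    No investigator ever views the image itself; only digests compare.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_hashes
```

The key property is that the database holds only hashes, and matching happens without a human viewing anything, which is exactly why it only catches previously catalogued images and not new ones.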

Point being, disallowing encryption sacrifices a lot, while potentially not even being that useful for catching child abusers in practice.

I'm sure some offenders could be caught this way, but it would also cause so many problems itself.

  • > Are you suggesting all messaged photos should be scanned, and potentially viewed by humans, in case it depicts a nude minor?

    No, I was not suggesting that.