Comment by AnthonyMouse

3 months ago

> It seems nice, but every single time I see a service allowing anonymous uploads like this, I immediately think: criminal use.

This seems like the Hollywood-movie-plot version of criminal use.

Actual criminals just put a normal server/proxy in a non-extradition country, or compromise one of the zillion unpatched WordPress instances on the internet, or do something equally boring.

Might I say that this whole safetyist moral panic is very convenient for large corporations? If you can't host your own service due to these concerns, you'll use the cloud :)

  • It's not a moral panic; it's called "an extended engagement with law enforcement will be unpleasant and costly," and you probably don't want that.

    And if you're wondering why it's that way, casually observe every time people declare that those under arrest or on trial "don't deserve..." something.

    • The problem here is that we keep acting like the way to solve this is to have people making toy projects or general-purpose tools cower in fear of their own government and stop trying to make anything. Instead, we should establish a government that can distinguish between violent drug cartels and child abusers on one hand and innocent behavior or minor offenses on the other, and then not inflict senseless damage on the latter.


It's even more boring: when I share criminal data (usually old movies that are still in copyright), I just put it in an encrypted 7-Zip archive and upload it to Google Drive, then delete it after my friend downloads it.
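
For what it's worth, the boring mechanics are a few lines. Here's an illustrative sketch that just shells out to the 7-Zip CLI (assuming the 7z binary is on your PATH), with -mhe=on so even the file names inside the archive are encrypted:

    # Illustrative only: build a password-protected 7z archive with
    # encrypted headers; the resulting file can go to any upload UI.
    # (Passing the password as an argument is visible in the process
    # list; fine for a sketch, not for a shared machine.)
    import subprocess

    subprocess.run(
        ["7z", "a", "-pCorrectHorseBatteryStaple", "-mhe=on",
         "movies.7z", "movies/"],
        check=True,
    )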

I mean, in this case we're talking about emoji, so I'm having a hard time picturing the criminal use, but in general anonymous file uploads or text uploads absolutely get used by criminals as soon as they're discovered. Anyone who's run a service for long enough will have stories of the fight against spam and CSAM (I do!).

  • > I mean, in this case we're talking about emoji, so I'm having a hard time picturing the criminal use, but in general anonymous file uploads or text uploads absolutely get used by criminals as soon as they're discovered

    You can use the emoji service as an anonymous data-upload service, because it transfers information and you can encode arbitrary data into other data. But that sounds like work, and people are lazy, and criminals are people, so they'll generally do the lazy thing and use one of the numerous other options available to them that are less work than creating and distributing an emoji encoder.
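
    Just to make the point concrete, here's a minimal sketch of what such an encoder could look like (purely illustrative; the codepoint range is an arbitrary assumption, not anything the actual service does):

        # Hypothetical encoder: smuggle arbitrary bytes through an
        # emoji-only channel by offsetting each byte value into a
        # contiguous block of symbol codepoints starting at U+1F600.
        # Not every codepoint in that range is an emoji proper, but
        # the mapping round-trips, which is all that matters here.
        BASE = 0x1F600

        def to_emoji(data: bytes) -> str:
            return "".join(chr(BASE + b) for b in data)

        def from_emoji(text: str) -> bytes:
            return bytes(ord(c) - BASE for c in text)

        payload = b"any bytes at all"
        assert from_emoji(to_emoji(payload)) == payload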

    If you make a generic file upload service, well, they don't have to do as much work to use that. Then the question is, what should we do about that?

    The next question is, does preventing them from using a given service meaningfully prevent any crime? That one we know the answer to: no, it does not, because they still have all the other alternatives, like putting it on a server or service in a foreign country, or compromising random WordPress instances, etc.

    Then we can ask, from the perspective of what the law should be and the perspective of a host under a given set of laws, what should we do? And these are related questions, because you want to consider how people are going to respond to a given set of laws.

    So, what happens if you impose strict liability on hosts regardless of whether they know that a given thing is a crime? Well, then you don't have any services hosting data for people, because nobody has a 0% false-negative rate, and without one you're going to jail.

    What if you only impose liability if they know about it? Then knowing is a liability, because you still can't have a 0% false-negative rate, so they're going to avoid knowing, and you end up with Mega encrypting user data so they can't see it themselves. That seems pretty dumb; you'd like them to be able to remove obvious bad stuff without putting liability on them for not being 100% perfect.

    What if you only impose liability if someone else reports it? This works like the DMCA takedown process, and then you get a combination of the first two. They can allow uploads but they can also remove things they're aware of and want to remove, but they end up de facto required to remove anything anyone reports, because if they don't and they ever get it wrong then they're screwed. So then you get widespread takedown abuse and have created a trolling mechanism. This is not a great option.

    What if you let them moderate without any liability but require a court order to force them to take something down? This is like the approach taken by the CDA, and it's the best option, because you're not forcing risk-averse corporate bureaucrats to comply with evidence-free, fraudulent takedowns, but you still allow services to remove obvious spam etc. without liability. This leaves the service with a good set of incentives: in general they'll want to satisfy users, so they'll try to remove spam but not non-spam.

    Meanwhile, this still leaves crimes to be investigated by the people who are actually supposed to be investigating crimes, i.e. law enforcement, and the courts can still order things to be taken down -- and more than that, put the actual criminals in jail -- without penalizing the service for not being an infallible adjudicator of what is and isn't crime.