Comment by ocdtrekkie

5 years ago

The solution is simple: Liability. As soon as it becomes legally infeasible to let algorithms block people, it will stop happening.

Make it easy and affordable to submit legal complaints for tech misbehavior and make the penalties hurt.

Ah, so you suggest liability for the vendors of the software blocking websites, with, in practice [1], no liability for the operators of a compromised website, if it is phishing/malware?

This is a great approach, if your goal is to optimize for increasing the amount of dangerous crap on the web. But, eh, that's surely worth it, because the profitability of startups is more important than little things like the security of the average netizen...

[1] Even if you make the operators liable [2], in practice, you'll never be able to collect from most of them. Whereas the blacklist curators are a singular, convenient target...

[2] If you can demonstrate how the operators of compromised websites can be held liable for all the harm they cause, I will happily agree that we should do away with blacklists. Unfortunately, the technical and legislative solutions for this are much worse than the disease you are trying to treat.

  • Since phishing is not going away with or without blacklists - for obvious reasons: lists can't cover everything, and you can't add sites to a list instantly - I am willing to tolerate a slight increase in phishing, which is going to exist anyway, in exchange for not having Google (or any other megacorp, or any other organization for that matter) as a gatekeeper of everybody's access to the internet. The potential for abuse of such power is much greater and much more dangerous than the danger from a tiny increase in phishing.

    • > I am willing to tolerate a slight increase in phishing

      According to Google's most recent transparency report[1], as of December 20th of last year they were blocking around 27,000 malware distribution sites and a little over 2,000,000 phishing sites.

      In your view, would turning off those blacklists and allowing those >2,000,000 sites to become functional again count as a "slight" increase?

      (edit: That's a real question, incidentally, not a disagreement or an attempt at a 'zing'; I have no knowledge in this area but went to look up the numbers, and am curious whether 2,000,000 is truly a vanishingly small amount, relative to everything else that's out there that's not already on the list)

      [1]: https://transparencyreport.google.com/safe-browsing/overview
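      As an aside on the mechanics being debated: blacklists like Safe Browsing don't ship full URL lists to clients. Clients hold short SHA-256 hash prefixes of bad URLs and, on a prefix hit, confirm the match with the server. A minimal sketch of the local-lookup side (the class name and the 4-byte prefix length here are illustrative; a real client also canonicalizes URLs and checks host/path prefix expressions, none of which is shown):

```python
import hashlib


def url_hash_prefix(url: str, n: int = 4) -> bytes:
    """Return the first n bytes of the SHA-256 hash of a URL.

    Real clients hash a canonicalized form of the URL; canonicalization
    is omitted in this sketch.
    """
    return hashlib.sha256(url.encode("utf-8")).digest()[:n]


class PrefixBlocklist:
    """Toy local blocklist holding only short hash prefixes.

    On a prefix hit, a production client would fetch the full hashes
    from the server to confirm the match; here we only report the hit,
    so rare prefix collisions would show up as false positives.
    """

    def __init__(self, bad_urls):
        self.prefixes = {url_hash_prefix(u) for u in bad_urls}

    def probably_blocked(self, url: str) -> bool:
        return url_hash_prefix(url) in self.prefixes


# Usage: seed the list with one known-bad URL, then check lookups.
blocklist = PrefixBlocklist(["http://phish.example/login"])
blocklist.probably_blocked("http://phish.example/login")   # prefix hit
blocklist.probably_blocked("http://safe.example/home")     # no hit
```

      Storing prefixes rather than URLs is what keeps the local list small enough to ship to every browser while still covering millions of entries; it also means the curator, not the client, holds the authoritative list, which is exactly the gatekeeping concern raised upthread.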