Comment by 0x426577617265
5 years ago
I think the author highlights the main issue at the end of the article. This is where pressure needs to be applied. I get it, Google’s process probably protects a lot of end users from malicious sites. Getting a real business added to this blocklist by a bot, though, is not cool. Perhaps there should be a process to whitelist your own domains if this power can’t be wrested from Google.
> Google literally controls who can access your website, no matter where and how you operate it. With Chrome having around 70% market share, and both Firefox and Safari using the GSB database to some extent, Google can with a flick of a bit singlehandedly make any site virtually inaccessible on the Internet.
> This is an extraordinary amount of power, and one that is not suitable for Google's "an AI will review your problem when and if it finds it convenient to do so" approach.
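For site owners wondering whether their domain has landed on that list, Google does expose it via the Safe Browsing v4 Lookup API. Here's a minimal sketch in Python, assuming you have an API key; the `clientId` string is a placeholder, and the threat types shown are just a common subset:

```python
# Sketch: check whether a URL is flagged by Google Safe Browsing
# using the v4 threatMatches:find Lookup API. Requires an API key
# from the Google Cloud console; "example-client" is a placeholder.
import json
import urllib.request

API_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_payload(url):
    """Build the JSON request body for a single-URL lookup."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }

def check_url(url, api_key):
    """POST the lookup; an empty response body means the URL is not listed.

    Returns the list of threat matches (empty if the URL is clean).
    """
    req = urllib.request.Request(
        f"{API_ENDPOINT}?key={api_key}",
        data=json.dumps(build_lookup_payload(url)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        raw = resp.read()
    body = json.loads(raw) if raw.strip() else {}
    return body.get("matches", [])
```

Note this only tells you the *current* state of the list; it doesn't help with the appeals process the article complains about.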
> Getting a real business added to this blocklist by a bot though is not cool.
Real businesses can (and often do) host malware too. There was a notable incident where php.net was hacked and serving malware, which Google flagged. The owner of php.net was pretty mad at first and claimed it was a false positive. It wasn't.
Not to mention the thousands and thousands of unsecured WordPress and similar systems that have been turned into malware-delivering botnets.
At my local faculty there were, at one point, no fewer than six different malware-serving sites (WordPress, Drupal, and similar unpatched software), all happily delivering malware from a university domain.
Right, I’m not saying they aren’t a risk. I’m suggesting that if a real business is whitelisted, an automated process shouldn’t be allowed to blacklist it without some form of human review.