Comment by vaduz
5 years ago
It's a relatively long article - but it does not answer one simple question, which is quite important when discussing this: were there any malicious files hosted on that semi-random Cloudfront URL? I realise that Google did not provide help identifying it - but that does not mean one should simply recommission the server under a new domain and continue as if nothing had happened!
From TFA:
> We quickly realized an Amazon Cloudfront CDN URL that we used to serve static assets (CSS, Javascript and other media) had been flagged and this was causing our entire application to fail for the customer instances that were using that particular CDN
> Around an hour later, and before we had finished moving customers out of that CDN, our site was cleared from the GSB database. I received an automated email confirming that the review had been successful around 2 hours after that fact. No clarification was given about what caused the problem in the first place.
Yes, yes, Google Safe Browsing can use its power to wipe you off the internet, and when it encounters a positive hit (false or true!) it does so quite broadly. But that is exactly what is expected for a solution like this to work - and it will do it again if the same files are hosted under a new URL, as soon as it detects the problem again.
Author here. Nothing was fixed, and the blacklist entry was cleared upon requesting a review, with no explanation.
They seem to be unable to answer this question since Google provided no URL. Without knowing what is considered malicious, how could they check if there was anything? What if it is a false positive?
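For what it's worth, you don't have to wait for Search Console: the Safe Browsing v4 Lookup API lets you check a list of your own URLs directly. A rough Python sketch (the client id is a placeholder, and you'd need your own API key from the Google Cloud console):

```python
import json
from urllib import request

SAFE_BROWSING_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_payload(urls, client_id="example-client"):
    """Build the JSON body for a v4 threatMatches:find request."""
    return {
        "client": {"clientId": client_id, "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

def check_urls(urls, api_key):
    """POST the lookup; an empty JSON response means no matches."""
    body = json.dumps(build_lookup_payload(urls)).encode()
    req = request.Request(
        f"{SAFE_BROWSING_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read() or b"{}")
```

Polling this for your CDN hostnames would at least tell you *whether* you're flagged, though - as the article found - it still won't tell you *why*.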
I am just guessing here, but in case the author had their service compromised, maybe they can't disclose the information. Feels like they know what they are doing, and at least to me, reading between the lines, it looks like they fixed their problem and they advise people to fix it too:
> If your site has actually been hacked, fix the issue (i.e. delete offending content or hacked pages) and then request a security review.
Author here. We didn't do anything other than request the flag to be reviewed.
The recommended steps listed in the article were not what we used; they're just a suggested process I came up with while putting the article together. Clearly, if the report you receive from Google Search Console is correct and actually contains malware URLs, the right way to deal with the situation is to fix the issue before submitting it for review.
Yes, I guess if you're allowing users to upload arbitrary files that may contain viruses or malware, and you're not scanning the files, that makes you a potential malware host. That's how Google may see it. They're trying to protect their users, and you've created a vector for infection.
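A minimal sketch of the "scan before you serve" idea, assuming you have some feed of known-bad file digests (the names and the feed itself are made up for illustration). Hash matching only catches exact, already-known samples, so it's a floor, not a fix - a real pipeline would also run a proper AV engine:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Digest used as the blocklist key."""
    return hashlib.sha256(data).hexdigest()

def is_allowed(upload: bytes, bad_hashes: set) -> bool:
    """Reject an upload whose SHA-256 appears in a known-bad set.

    `bad_hashes` stands in for a hypothetical threat-intel feed of
    known-malware digests; anything not in the set passes through.
    """
    return sha256_hex(upload) not in bad_hashes
```

You'd call this at upload time, before the file ever lands on the CDN, rather than trying to clean up after Safe Browsing has already flagged the host.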
Too bad they don't ban googleusercontent.com.
Whether or not this author's site was hosting malicious content is irrelevant to the thrust of the article, which is that due to browser marketshare, Google has a vast censorship capability at the ready that nobody really talks about or thinks about.
Think about a jurisdiction Google operates in deciding that it wants to force Google to shut down certain websites - the ones corresponding to apps it has already made Google and Apple ban from their app stores, for "national security" or whatever.
This is one mechanism for achieving that.
If there was malicious content, the search console would have provided a sample URL. It didn't.