Comment by dharmab

5 years ago

We’re pretty sure they get reports from Chrome. A security researcher at my workplace was running an exploit against a dev instance as part of their secops role, and the domain got flagged even though the site was an isolated, firewalled instance not reachable from the internet.

Yes, I have noticed this too: when I create a brand-new dev domain with a crawler-blocking robots.txt, it is not found in any Google search until I open the dev URL in Chrome. Then bam! Watch as their crawler starts trying to work through the site, just from opening the URL in Chrome.
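
For reference, a crawler-blocking robots.txt is just this two-line file served at the site root (standard robots-exclusion syntax, shown here for illustration; it only binds crawlers that choose to honor it):

    User-agent: *
    Disallow: /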

This is why I never use Chrome. They scrape the Google Safe Browsing data sent from Chrome browsers and just do not care about privacy.

  • Maybe it's from the search suggestion API? Anyway, I turn that off as soon as I create a new browser profile, along with the Safe Browsing list and the automatic search when I type an unrecognized URL. When I want to search, I use the browser's search input (Ctrl+K). The URL bar is for URLs only.
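
    A minimal sketch of locking those down via managed policy instead of per-profile clicks, assuming Google Chrome on Linux (SearchSuggestEnabled, SafeBrowsingProtectionLevel, and UrlKeyedAnonymizedDataCollectionEnabled are real Chromium policy names; the directory is the standard managed-policy location on Linux and the filename is arbitrary):

        {
          "SearchSuggestEnabled": false,
          "SafeBrowsingProtectionLevel": 0,
          "UrlKeyedAnonymizedDataCollectionEnabled": false
        }

    Saved as /etc/opt/chrome/policies/managed/privacy.json, this applies to every profile, so a fresh profile starts with those features already off.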

  • You realize that robots.txt is an "on your honor" system, right? Anyone can write a script that ignores robots.txt and posts whatever it finds to the internet, so other sites could surface your site via third-party data.

    Chrome does not do what you claim it does.
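
    To make the "on your honor" point concrete, here is a minimal Python sketch (the dev-site URLs are hypothetical): honoring robots.txt means the crawler itself asks permission via urllib.robotparser, and a scraper that skips the check fetches the page just the same.

        import urllib.robotparser
        import urllib.request

        url = "https://dev.example.com/private/"  # hypothetical unlinked dev URL

        # A polite crawler opts in to the honor system:
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url("https://dev.example.com/robots.txt")
        rp.read()
        if rp.can_fetch("PoliteBot", url):
            page = urllib.request.urlopen(url).read()

        # A rude scraper simply skips the check; nothing stops it:
        page = urllib.request.urlopen(url).read()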

    • I have tested this several times: without Chrome the site stays unindexed, and every time I then open it in Chrome, the site can be found on Google. Remember, these sites are completely unlinked, fresh URLs. So yeah, it really does.

But that means they can't verify it, right? Couldn't a malicious actor use this to attack their competitors?

Add an internal DNS entry for your competitor's domain, spin up an internal server hosting some malware, and open it from Chrome.
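
A minimal sketch of that setup (competitor.example, the 10.0.0.5 host, and the filename are hypothetical; the EICAR string is the standard harmless antivirus test file, standing in for "some malware"):

    # On the test machine, point the competitor's name at an internal box:
    echo '10.0.0.5  competitor.example' | sudo tee -a /etc/hosts

    # On 10.0.0.5, serve the EICAR test file over HTTP (port 80 needs root):
    printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.com
    sudo python3 -m http.server 80

    # Then open http://competitor.example/eicar.com in Chrome and watch
    # whether the real domain gets flagged.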