Comment by freedomben

5 years ago

After years of seeing developments like this get worse and worse, it fills me with rage to think about how clearly nobody in power at Google cares.

I naively used to think, "they probably don't realize what's happening and will fix it." I always try to give the benefit of the doubt, especially having been on the other side so many times and seen how 9 times out of 10 it's not malice, just incompetence, apathy, or hard priority choices driven by economic constraints (though the latter is not likely a problem Google has).

At this point, however, I still don't think it's outright malice, but the doubling down on these horrific practices (algorithmically and opaquely destroying people) is so egregious that it doesn't really matter. As far as I'm concerned, Google is to be considered a hostile actor. It's not possible to do business on the internet in any way without running into them, so "de-Googling" isn't an option. Instead, I am personally going to (and will advise my clients to) do the following:

Consider Google as a malicious actor/threat in the InfoSec threat modeling that you do. Actively have a mitigation strategy in place to minimize damage to your company should you become the target of their attack.

As with most security planning/analyzing/mitigation, you have to balance the concerns of the CIA Triad. You can't just refuse Google altogether these days, but do NOT treat them as a friend or ally of your business, because they are most assuredly NOT.
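For concreteness, here is a hypothetical sketch of what that advice could look like as an entry in a lightweight threat register. Every name, field, and mitigation below is my own invented illustration, not a prescribed methodology; real threat modeling (e.g. STRIDE) is considerably more involved.

```python
# A minimal, illustrative threat-register entry treating a platform provider
# as a threat actor. Fields and mitigations are invented for the sketch.

threat_register = [
    {
        "actor": "Google (Safe Browsing / account enforcement)",
        "threat": "Domain flagged or account suspended by automated systems",
        "impact": "Site unreachable in major browsers; email and docs inaccessible",
        "likelihood": "low, but hard to appeal when it happens",
        "mitigations": [
            "Serve customer-facing apps from domains separate from the marketing site",
            "Keep DNS and email with providers independent of Google",
            "Maintain offline backups of any data held in Google services",
            "Document an escalation and communications plan for a sudden delisting",
        ],
    },
]

# Review the register like any other: walk each actor and its mitigations.
for entry in threat_register:
    print(entry["actor"])
    for mitigation in entry["mitigations"]:
        print(" -", mitigation)
```

The point is simply that such an entry sits alongside the conventional actors (criminals, insiders, competitors) and gets the same mitigation treatment.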

I'm also considering AWS and Digital Ocean more in the same vein, although that's off topic on this thread. (I use Linode now as their support is great and they don't just drop ban hammers and leave you scrambling to figure out what happened).

Edit: Just to clarify (based on confusion in comments below), I am not saying Google is acting with malice (I don't believe they are personally). I am just suggesting you treat it as such for purposes of threat modeling your business/application.

Walter Jon Williams, circa 1987, wrote a story of humanity's far-flung future called "Dinosaurs," in which humans have been engineered into a variety of specialized forms to better serve the species as a whole. After nine million years of tweaking, most of them are not too bright, but they are perfect at what they do. Ambassador Drill is trying to prevent a newly discovered species, the Shar, from treading on the toes of humanity, because if the Shar come into even slight accidental conflict with humans (say, human terraforming ships wiping out Shar colonies because they just didn't notice them), the rather terrifyingly adapted military subspecies of humanity will utterly wipe out the Shar, as they have efficiently done to so many others, just as a reflex. Ambassador Drill fears that negotiations, despite his desire for peace, may not go well, because the terraforming ships will take a long time to receive the information that the Shar are in fact sentient and that billions of them ought not to be wiped out ...

Google, somehow, strikes me as this vision of humanity, but without an Ambassador Drill. It simply lumbers forward, doing its thing. It is to be modeled as a threat not because it is malign, but because it doesn't notice you exist as it takes another step forward. Threat modeling Lovecraft-style: entities that are alien and unlikely to single you out in particular; it's just that what they do is a problem.

Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms. I can imagine it still muttering "The algorithms said ..." as anti-trust measures reverse-Frankenstein it into hopefully more manageable pieces.

  • > Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms

    That's fine when you're a plucky growth startup. Less fine when you run half the internet.

    If Google doesn't want to admit it's a mature business and pivot into margin-eating, but risk-reducing support staffing, then okay: break it back up into enough startup-sized chunks that the response failure of one isn't an existential threat to everyone.

    • This lack of staffing is something that really annoys me. It's endemic to the big tech companies, and is often cited as the reason why (for example) YouTube, Twitter, Facebook, etc. cannot possibly proactively police all their user content before publishing, given the huge volume.

      Of course they can; Google and the rest earn enough to throw people at the problems they cause or enable. If they can't, then they should stop. If you cannot scale responsibly, then you should not scale at all, as your business has simply externalised its costs onto everyone else it impacts.

      2 replies →

    • I agree. Google is such a large behemoth that it actively avoids customer support wherever it can. Splitting it into smaller businesses, each with some autonomy and unable to rely on ad money fueling everything else, means those smaller businesses have to give a shit about customers and compete on even ground.

      Same applies to Facebook and other tech companies. The root issue is funneling huge profits from one area of business into other avenues that then compete with the market on unfair ground (or outright buying out the competition).

      However, anti-trust enforcement in the US has eroded significantly.

      3 replies →

    • > That's fine when you're a plucky growth startup. Less fine when you run half the internet.

      It's never fine.

      The abdication of responsibility and, more importantly, liability to algorithms is everything that's wrong with the internet and the economy. The reason these tech conglomerates are able to get so big, when companies before them couldn't, is that scaling the way they have would otherwise require employing thousands of humans to do the jobs now being done poorly by their algorithms. Nothing they're doing is really a new idea; they just cut costs and made the business more profitable. The promise was that the algorithms/AI could do just as good a job as humans, but that was always a lie and, by the time everyone caught on, they were "too big to fail".

      1 reply →

    • This is probably a big part of why Google is invested in (limited) AI, because a good enough "artificial support person" means having their cake and eating it too.

      1 reply →

  • > It simply lumbers forward, doing its thing. It is to be modeled as a threat not because it is malign, but because it doesn't notice you exist as it takes another step forward.

    This is a concept that I think deserves more popular currency. Every so often, you step on a snail. People actually hate doing this, because it's gross, and they will actively seek to avoid it. But that doesn't always work, and the fact that the human (1) would have preferred not to step on it; and (2) could, hypothetically, easily have avoided doing so, doesn't make things any better for the snail.

    This is also what bothers me about people who swim with whales. Whales are very big. They are so big that just being near them can easily kill you, even though the whales generally harbor no ill intent.

    • I'm curious: are whales more dangerous on an hour-by-hour basis than driving?

      That's generally my rubric for whether a safety concern is possibly worth avoiding an activity over.

      2 replies →

  • > “You will have killed us,” Gram said, “destroyed the culture that we have built for thousands of years, and you won’t even give it any thought. Your species doesn’t think about what it does any more. It just acts, like a single-celled animal, engulfing everything it can reach. You say that you are a conscious species, but that isn’t true. Your every action is... instinct. Or reflex.

    Good story. I can imagine what the specialized humans did to the generalist humans eons ago.

  • Except in our case, Google's terraforming ships couldn't care less. It's just not part of their programming that there might be some intelligent life out there worth caring about that might be hurt by their actions, so there's no way for them to receive this information. It's not that it's hard to explain, there's nobody to explain it to.

  • Modern large corporations are just a more inefficient, less effective paperclip maximizer, with humans gumming up the works.

    Google is striving hard to remove the "human" part of the problem.

    • After finishing the parent comment,

      > Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms. I can imagine it still muttering "The algorithms said ..." as anti-trust measures reverse-Frankenstein it into hopefully more manageable pieces.

      I immediately pressed C-f to search for the string "paperclip maximizer", and was not disappointed. Thanks for mentioning it.

  • You're making another perfect case for why Google should be broken up. It’s important that we can choose again.

"never attribute to malice that which is adequately explained by stupidity" and all that, but after the events and the almost perfectly orchestrated behavior we've seen in the past and last couple of weeks it's becoming increasingly difficult, at least to me, to not attribute this to malice. Probably deliberate negligence is a better term. They know their systems can make mistakes, of course they do, and yet they build many of their ban-hammers and enforce them as if hat wasn't the case.

This approach to systems engineering is the technological equivalent of the personality trait I most abhor: the tendency to jump to conclusions and not be skeptical of one's own worldview.

[1] https://en.m.wikipedia.org/wiki/Hanlon%27s_razor#cite_note-m...

  • "Consciously malicious" is not a good rule of thumb standard to measure threats to yourself or your business; it only accounts for a tiny bit of all possible threats. GP isn't claiming that Google is consciously malicious, they are claiming that you should prepare as if they were. These are not the same thing.

    A lion may not be malicious when it's hunting you, it's just hungry; look out for it anyway. A drunk driver is unlikely targeting you specifically; drive carefully anyways. Nobody at Google is specifically thinking "hehehe now this will ruin jdsalareo's business!" but their decisions are arbitrary, generally impossible to appeal, and may ruin you regardless; prepare accordingly.

    • Yes, exactly what I meant, thank you.

      And very well said I might add. I don't mean to leave a vapid "I agree with you" comment, but your analogies are fantastic. They are accurate, vivid, and easily understandable.

  • I think mistakes just happen, and are possibly just as helpful as they are harmful to Google. If they find something they particularly hate or find damaging, they can just "oops" their way to the problem being gone. Take Firefox[1]: each time a Google service went "oops" on Firefox, Chrome gained market share.

    I have no doubt they'd use similar "oops" for crushing a new competitor in the ad space. Or perhaps quashing a nascent unionizing effort. It's all tinfoil of course because we don't have any public oversight bodies with enough power to look into it.

    [1] https://www.techspot.com/news/79672-google-accused-sabotagin...

    • That's the nature of a dominant position. It gives you the power to engineer "heads I win, tails you lose" dynamics.

  • Well, I think the stupidity and laziness is exacerbated by their ill will towards customers and users. This is also what prevents them from reforming. The general good will and sense of common purpose was necessary in Google's early days when they portrayed themselves as shepherds of the growth of the web. Now they are more like feudal tax collectors and census takers. Sure they are mostly interested in extracting their click-tolls, but sometimes they just do sadistic stuff because it feels good to hurt people and to be powerful. Any pseudo-religious sense of moral obligation to encourage 'virtuous' web practices has ossified, decayed, been forgotten, or been discarded.

    • I was thinking about this just this week, in the context of online shopping with in-store pickup. My wife recently waited nearly half an hour for a “drive up” order where she had to check in with an app. Apparently the message didn’t make it to the store, and when she called halfway into her wait she was greeted not with consolation, but derision for not understanding the failure points in this workflow.

      It seems that the inflexible workflows of data processing have crept into meatspace, eliminating autonomy from workers’ job functions. This has come at a huge cost in perceived customer service. As an engineer who has long worked with IT teams creating workflows for creators and business people, I see the same non-empathetic, user-hostile interactions well known from internal tools becoming the standard way to interact with businesses of all sizes. Broken interactions that previously would have been worked around now leave customer service reps stumped, with no recourse except the bluntest choices.

      This may be best for the bottom line, but we’ve lost some humanity in the process. I fear the margin needed to return to some previously organic interaction would be so high that it would be impossible to scale and compete. Boutique shops still offer this service, but often charge accordingly, and without the ability to maintain in-person interactions at the moment, I worry there won’t be many left when the pandemic subsides.

      8 replies →

  • >”never attribute to malice that which is adequately explained by stupidity"

    I keep reading this on the internet as if it’s some sort of truism, but every situation in life is not a court where a prosecutor is trying to prove intent.

    There is insufficient time and resources to evaluate each and every circumstance to determine each and every causative factor, so we have to use heuristics to get by and make the best guesses. And sometimes, even many times, people do act with malice to get what they want. But they’re obviously not going to leave a paper trail for you to be able to prove it.

    • > I keep reading this on the internet as if it’s some sort of truism

      I don’t believe this statement was initially intended to be axiomatic; rather, it serves as a reminder that the injury one is currently suffering is, perhaps more likely than not, the result of human frailty.

      12 replies →

    • The saying is for your own sanity. If you go around assuming every mistake is malicious, it’s going to fuck up your interactions with the world.

      Everyone I know who approaches the world with a me vs. them mentality appears to be constantly fraught with the latest pile of actors “trying to fuck them”.

      It’s an angry, depressing life when you think that the teller at the grocery store is literally trying to steal from you when they accidentally double scan something.

      2 replies →

    • I think you have a point, and it's important to not be naive as people out there will steamroll those around them if given the opportunity. Personally I try to not immediately assume malice because I've found it leads to conspiracy-minded thinking, where everything bad is due to some evil "them" pulling the strings. While I'm sure there are some real "Mr. Burns" types out there, I can't help but feel most people (including groups of them as corporations) are just acting in self-interest, often stumbling while they do it.

    • It's a truism not because people are never malicious, but because we tend to see agency where there is none. Accidents are seen as intentional. This tendency leads to conspiracy theories, superstitions, magical thinking, etc. We're strongly biased towards interpreting hurtful actions as malice.

  • I'd add to this that willfully refusing to remedy stupid can be an act of malice.

    • That's a very good point. Actually, I just thought about something in the context of this conversation: one's absolute top priority, both in life and tech, should be to stop the bleeding[1] that emerges from problematic circumstances.

      Whether those problematic circumstances (the harm) arise due to happenstance, ignorance, negligence, malice, mischievousness, ill intentions or any other possible reason is ancillary to the initial objective and top priority of stopping the bleeding. Intent should be of no interest to first responders (in our case, customers or decision makers) when harm has materialized.

      Establishing intent might be useful or even crucial for the purposes of attribution, negotiation, legislation, punishment, etc. All of those, however, are only of interest, in this context, when the company in question hasn't completely damaged its brand and the public (us) is still able to trust it.

      All this to say, yes, this is a terrible situation to be in, how are we going to solve it?

      Do I care if Google is doing harm to the web through being wilfully ignorant, negligent, ill-intentioned, etc.? No, not an iota; I care about solving the problem. Whether they do harm deliberately or for other reasons should be of no interest to me in the interest of stopping the bleeding.

      [1] https://isc.sans.edu/diary/Making+Intelligence+Actionable/41...

      1 reply →

  • Employees and managers at Google get promoted by launching features and products. They're constitutionally incapable of fixing problems caused by over-active features for the same reason they've launched seven different chat apps.

  • I personally find Hanlon's Razor to be gratuitously misapplied. Corporate strategy is often better described as weaponized willful ignorance. You set up a list of problems that shall not be solved or worked on, and that sets the tone of interaction with the world.

    Plus, financial incentives create oh so many opportunities for things to go wrong or be outright miscommunicated that it is not even funny.

  • Thanks, I totally agree. Just to be clear I'm not saying it's malice as I don't believe that. I'm just saying the end result is the same so one should consider them a hostile actor for purposes of threat modeling.

    Given you're the second person who I think took away that I was accusing them of malice, I probably need to reword my post a bit to reduce confusion.

    Accusing them of malice is irresponsible without evidence, and if I were doing that it would undermine my credibility (which is why I'm pointing this out).

    • > Thanks, I totally agree. Just to be clear I'm not saying it's malice as I don't believe that. I'm just saying the end result is the same so one should consider them a hostile actor for purposes of threat modeling.

      No worries at all! I interpreted your post the way you intended, and I agree fully, being in InfoSec myself.

      Going by how you phrased your original post, you're probably more patient and/or well-intentioned than me as I'm farther along the path of attributing mistakes by big, powerful corporations to malice right away.

  • Your comment made me think that they have the same attitude to support as they do to hiring: they are OK with a poorly tuned model as long as the false positives / negatives impact individuals rather than Google’s corporate goals.

  • I would argue that consistent behavior defeats the benefit of the doubt, or the excuse of involuntary stupidity. Also, I believe most good-sounding quotes are easy to remember but not backed by much truth.

Author here. I don't think it's malice on their part, but their hammer is too big to be wielded so carelessly.

  • Yes, I agree with you (and thank you for your Medium post, by the way; our only chance of ever improving the situation is to call attention to it. I fully believe Google leadership has to be aware of it at this point, but it clearly won't be a priority for them to fix until the public backlash/pressure is great enough that they have to).

    Just to avoid any misreading, I didn't say I thought it was malice on Google's part. My opinion (as mentioned above) is:

    > I still don't think it's outright malice, but the doubling down on these horrific practices (algorithmically and opaquely destroying people) is so egregious that it doesn't really matter.

    So they are not (at least in my opinion without seeing evidence to the contrary) outright malicious. But from the perspective of a site owner, I think they should be considered as such and therefore mitigations and defense should be a part of your planning (disaster recovery, etc).

    • I do not trust management folks, whose paychecks and promotions depend on how successful such hostile actions are, to make the right decisions. I also do not think that they are deliberately ignorant/indifferent, or that calling attention to it will do any good. These types of individuals got to where they are largely by knowing full well that their actions are malicious and legal. I used to work under such people, and currently interact and work with such people on a very regular basis (you could even consider me part of them, tbh). It is very much possible that the management-level folks at Google don't have an ounce of goodness in them, and will always see such decisions from a zero-sum perspective.

      To make it relatable, do you care so much for a mosquito if it's buzzing around you, disrupting your work and taking a toll on your patience? Because your SaaS is a mosquito to Google. After a certain point, you will want to kill the mosquito, and that's exactly what Google execs think so as to get to their next paycheck.

  • They have the option of not wielding the hammer. I for one never appointed them the guardian of the walled internet.

    • So browsers should just let users go to obvious phishing sites?

      It's easy to take this position when you're very tech savvy. Imagine how many billions of less tech savvy people these kinds of blocklists are protecting.

      It's very easy to imagine a different kind of article being written: "How Google and Mozilla let their users get scammed".

      2 replies →

    • > I for one never appointed them the guardian of the walled internet.

      On the other hand, lots of chrome users most likely do trust google to protect them from phishing sites. For those ~3 billion users a false positive on some SaaS they've never heard of is a small price to pay.

      It's a tricky moral question as to what level of harm to businesses is an acceptable trade off for the security of those users.

      6 replies →

  • Have you considered not using a 3rd party for hosting your JavaScript? There is always going to be some risk if the code isn’t under your control.

  • Is this list only maintained by Google? Do Firefox and Bing use the same list, is their process better/different? Is there any sharing happening?

  • Agree, we can only vote with our clicks.

    Sadly gmail and google docs are top notch products :(

    • No, we can't vote with our clicks. That's what it means when a handful of companies dominate most of the web, and the web plays a dominant role in the global economy.

      We have very little real choice.

      Occasionally people will pretend this is not so. In particular those who can't escape the iron grasp these companies have on the industry. Whose success depends on being in good standing with these companies. Or those whose financial interests strongly align with the fortunes of these dominant players.

      I own stock in several of these companies. You could call it hypocrisy, or you could even view it as cynicism. I choose to see it as realism. I have zero influence over what the giants do, and I do have to manage my modest investments in the way that makes the most financial sense. These companies have happened to be very good investments over the last decade.

      And I guess I am not alone in this.

      I guess what most of us are waiting for is the regulatory bodies to take action. So we don't have to make hard choices. Governments can make a real difference. That they so far haven't made any material difference with their insubstantial mosquito bites doesn't mean we don't hold out some hope they might. One day. Even though the chances are indeed very nearly zero.

      What's the worst that can happen to these companies? Losing an antitrust lawsuit? Oh please. There are a million ways to circumvent this even if the law were to come down hard on them. They can appeal, delay, confuse and wear down entire governments. If they are patient enough they can even wait until the next election - either hoping, or greasing the skids, for a more "friendly" government.

      They do have the power to curate the reality perceived by the masses. Let's not forget that.

      Eventually, like any powerful industry they will have lobbyists write the laws they want, and their bought and paid for politicians drip them into legislation as innocent little riders.

      We can't vote with our clicks. We really can't in any way that matters.

      That being said, I also would like regulatory bodies to step in and do something about it. To level the playing field. If nothing else, to create more investment opportunities.

      4 replies →

  • Great article. It’s not malice, it’s indifference.

    Google’s execs and veeps don’t care about small businesses, because most are career ladder climbers who went straight from elite colleges to big companies. Conformists who won’t ever know what it’s like to be a startup. As a group, empathy isn’t a thing for them.

  • That is malice.

    Accidentally unleashing a process that harms people is negligence. Not caring that you are being negligent is malice.

  • IMHO, it sounds like it worked. The things you changed sound like they've made your site more secure. In the future, Google's hammer can be a bit more precise, since you've segregated data.

    And you don't know what triggered it. It's possible that one of your clients was compromised or one of their customers was trying to use the system to distribute malware.

    • It's only more secure from Google's blacklist hammer.

      No significant security is introduced by splitting our company's properties into a myriad of separate domains.

      This type of incident can be a deadly blow to a B2B SaaS company, since you are essentially taking out an uptime-sensitive service that often has downtime penalties written into a contract. Whether this counts as downtime will depend on exactly how the availability definition is written.

      4 replies →

It's probably "scale thinking" that makes Google seem like they don't care: everything is huge when you're "at scale"; the impact of a small blunder can take down companies or black out nation states. It's part of the game of being "at scale". They probably believe it's untenable to build the infrastructure necessary for everything (website, startup, person, etc.) to matter.

This will sound crass, but it reminds me of Soviets cutting off the food supply to millions of people over the winter, due to industrial restructuring, and they brushed it off as "collateral damage".

Of course they care. They've taken over everything they've been able to take over and they're still going strong. This is not by mistake. They just care about different things than you do. This is why Google needs to be broken up.

> I am not saying Google is acting with malice (I don't believe they are personally)

I'd agree. The problem is there is no financial or regulatory incentive to do the right thing here.

It has zero immediate impact on their bottom line to have things work in the current fashion, and the longer term damage to their reputation etc. is much harder to quantify.

There's no incentive for them to fix this, so why would they?

They're never gonna care. They aren't incentivized to care. The only thing that can change the situation is the power of the American federal government, which needs to break Alphabet into 20-50 different companies.

> nobody in power at Google cares

My assessment might be: “nobody in power has time to prevent the myriad of problems happening all of the time, even though they handle the majority, with help from businesses, government agencies, etc., and given the huge impact of some problems on society as a whole, they may feel as though they’re riding in the front seat of a roller coaster, unaware of your single voice among billions from the ground down below.”

> they probably don't realize what's happening and will fix it

“If only the czar knew!”

I'm with you on the rest, but what has DO done to not have the benefit of doubt?

Also, to your point, an organization becomes something other than the sum of its parts, especially the bigger it gets.

Google can be a malicious actor without any individual necessarily acting maliciously.

  • Yeah, that's a fair question. I had a bad personal experience with them, and I've seen plenty of other issues too. There was a big one a little while ago where Digital Ocean destroyed somebody's entire company by banning them with AI: https://twitter.com/w3Nicolas/status/1134529316904153089

    In their defense, they acknowledged it and made some changes. I can't find the blog post now, so I'm going from memory. But that only happened because he got lucky: it blew up on HN/Twitter and got the attention of leadership at DO. How many people have been destroyed in silence?

    In my case, Digital Ocean only allows one payment card at a time and my customer (for whom the services were running) provided me with a card that was charged directly.

    A couple months later my customer forgot that he had provided the card. He didn't recognize "Digital Ocean", thought he had been hacked (which has happened to him before), called the bank, and filed a chargeback.

    When DO got the chargeback, they emailed me and also completely locked my account, so I was totally unable to access the UI or API. I didn't find out about the locked account until the next day. I responded to the email immediately and called my customer, who apologized and called the bank to reverse the chargeback. I was as responsive as they could have asked for.

    The next day I needed to open a port in the firewall for a developer to do some work. I was greeted with the dreaded "account locked" screen. I emailed them, begging and pleading with them to unblock my account. They responded that they would not unlock the account until the chargeback reversal had cleared. Research showed that that can take weeks.

    I emailed again explaining that this was totally unacceptable. It is not OK to have to tell your client, "yeah, sorry, I can't open that firewall port for your developer because my account is locked. Might be a couple of weeks." After a day or so, they finally responded and unlocked my account. Fortunately they didn't terminate my droplets, but I wonder what would have happened if I had already started using object storage, as I had been planning. This was all over about $30, by the way.

    After that terrifying experience, I decided staying on DO was just too risky. Linode's pricing is nearly identical and they have mostly the same features. Prior to launching my new infrastructure, I emailed their support asking about their policy: they do not lock accounts unless the person is long-term unresponsive or has a history of abuse.

    I've talked with Linode support several times and they've always been great. They're my go-to now.

    • I see where you're coming from. I've also had a bad experience with DO (my credit card company arbitrarily blocked their charges, which ended with my droplets terminated and all data and backups wiped). That was at least as much an error on my part, though.

      It does seem that they're unfortunately borrowing the playbook from AWS/Azure/GCP with respect to over-automation as they scale. More old-school support could have been their differentiator, but it seems they're going for growth. They're getting close to the razor's edge.

I'd go a step further and claim that most tech companies are ultimately a threat to people's freedom and happiness. Not the tech itself, but the people that wield and profit from it.

They care, but the dominant policy in Google's calculus about what features should be released is "Don't let the exceptional case drown the average case." A legitimate SaaS business serving its customers might get caught by this. But the average case is it's catching intentional bad actors (or even unintentional bad actors that could harm the Chrome user), and Google isn't going to refrain from releasing the entire product because some businesses could get hit by false positives. They'd much rather release the service and then tune to minimize the false positives.

To my mind, one of the big questions about mega corporations in the internet service space is whether this criterion for determining what can be launched is sufficient. It's certainly not the only criterion possible---contrast the standard for a US criminal trial, which attempts to evaluate "beyond a reasonable doubt" (i.e. tuned to be tolerant of false negatives in the hope of minimizing false positives). But Google's criterion is unlikely to change without outside influence, because on average, companies that use this criterion will get product to market faster than companies that play more conservatively.
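The tradeoff being argued about here can be made concrete. A minimal sketch with made-up scores (nothing to do with Google's actual system): an abuse classifier assigns each site a score, and where you set the blocking threshold determines how many legitimate sites get flagged (false positives) versus how much abuse slips through (false negatives).

```python
# Synthetic "abuse" scores for illustration only.
legit_scores = [0.05, 0.10, 0.20, 0.35, 0.55]  # benign sites
abuse_scores = [0.40, 0.60, 0.75, 0.90, 0.95]  # actual abuse

def confusion(threshold):
    """Count benign sites blocked (FP) and abuse sites missed (FN)."""
    fp = sum(s >= threshold for s in legit_scores)
    fn = sum(s < threshold for s in abuse_scores)
    return fp, fn

for t in (0.3, 0.5, 0.7):
    fp, fn = confusion(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
# A low threshold blocks more abuse but also more legitimate sites;
# a high threshold (the "beyond a reasonable doubt" end) does the reverse.
```

No threshold eliminates both error types at once; the dispute upthread is really about who bears the cost of each kind of error, not whether errors exist.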

  • Nah-- I think you've got it all wrong. The problem isn't the false positive/false negative ratio chosen.

    The problem is that there's false positives with substantial harm caused to others and with little path left open to them by Google to fix them / add exceptions-- in the name of minimizing overhead.

    Google gets all of the benefit of the feature in their product, and the cost of the negatives is an externality borne by someone else that they shrug off and do nothing to mitigate.

    • One solution, perhaps, could be to have some kind of turnaround requirement---a "habeas corpus" for customer service.

      By itself, it won't solve the problem... The immediate reaction could be to address the requirement by resolving issues rapidly to "issue closed: no change." But it could be a piece of a bigger solution.

Google Safe Search is only half the story. Another huge problem is Google's opaque and rash decisions about what sites throw up warnings in Chrome.

I once created a location-based file-transfer service called quack.space [0] very similar to Snapdrop [1], except several years before they existed. Unfortunately the idiot algorithms at Chrome blocked it, throwing up a big message that the site might contain malware. That was the end of it.

I had several thousand users at one point, thought that one day I might be able to monetize it with e.g. location based ads or some other such, but Google wiped that out in a heartbeat with a goddamn Chrome update.

People worry about AI getting smart enough to take over humans. I worry about the opposite. AI is too stupid today and is being put in charge of things that humans should be in charge of.

[0] https://www.producthunt.com/posts/quack-space

[1] https://snapdrop.net/

Google has a lot of control of the Web.

Much less control of the Internet.

One lesson is to build on IP directly, not the Web.

> I use Linode now as their support is great and they don't just drop ban hammers and leave you scrambling to figure out what happened.

Linode once gave me 48 hours to respond (with threats to take down the site) because a URL was falsely flagged by netcraft based on what looked like an automated security scan of software I was hosting. Granted, they did not take any action and dropped the report once I pointed out that it was bullshit, but I do not consider this great service. If there is no real evidence of wrongdoing I should not be receiving ultimatums.

(Googler)

You are only focusing on the negatives while completely ignoring the positives here.

Here are a few questions to consider that may give you better perspective:

1) Do you know the magnitude of financial and psychological damage caused by malware, phishing, etc on the web?

2) Do you believe that it is possible to have a human review every piece of automation-generated malware on the internet?

3) Do you believe it is possible to build an automated system that provides value with zero false positives?

4) Do you think an open standards body or government bureau would perform any better at implementing protections from the threats described here?

  • Author here - I don't underestimate the complexity of the task that Google Safe Browsing tries to accomplish.

    But: Do you believe there is no room for improvement in an automated, opaque system with clear evidence of malfunction, that summarily decides whether hundreds of people go unemployed when their company tanks over nothing more than an incorrectly set threshold on some algorithm?

    That is the real question to ask. Google is nowhere near its limits in terms of capability, as is made abundantly clear by its extremely comfortable financial position.

    • I do agree that there's room for improvement. There's always room for improvement, but there are also limits to the transparency one should provide for an anti-abuse system. It's difficult for anybody except an expert in this area to say what would be a safe and satisfactory way to expose appeal and remediation for false positives. In the example from the story it looks like the turnaround time was just an hour for your case, which seems rather good. The fact that not all consumers of this data were as responsive looks out of Google's control, and should be taken up with those companies.

      I don't agree with the premise of your last question. It's not Google's responsibility to protect the internet and provide a free anti-abuse database for other browser vendors, and yet Google does do this at significant cost. The fact that they don't do it perfectly is not a rationale for killing it or providing it with infinite resources.

      1 reply →

  • 2*) Do you believe that it is possible to have a human review every FALSE POSITIVE result from automated malware detection on the internet, when reported by those adversely affected by the false positive result?

    Yes, yes I do. Banks do it for their customers today at scale.

    • So what happens when the fraudsters automate clicking the "request review" button? They can spin up as many phishing sites as they want, and request as many human hours in review as they want.

      With banks, they only have to do that for their customers, whom they've at least had a chance of getting money from. But Google would need to provide it to every site which gets blocked, (as malware sites pretend to be legitimate). Which

      1 reply →

Your clients will hate you for this as you are creating false positives. Sure, Google is sometimes unethical, but calling them a malicious actor? Really?

Following "Consider Google as a malicious actor/threat" with "I am not saying Google is acting with malice" is probably a strong indicator that you should have thought it through before posting it.

  • "Consider as" does not mean "is". Your lack of reading comprehension is not the fault of the poster.