
Comment by teamspirit

5 years ago

Hear, hear! I really wish there was more human involvement in a lot of these seemingly arbitrary AI-driven actions. Everything from app review to websites and more. This heavy reliance on automated systems has led us down this road. Shoot, keep it, just give us the option to guarantee human review - with, of course, transparency. We don't need any more "some human looked at this and agreed, the decision is final, goodbye."

I know it's easier said than done, especially when taking the scale of the requests into account, but the alternative has, does, and will continue to do serious harm to the many people and businesses caught in this wide, automated net.

It's interesting how closely the unfolding of this awful scenario has followed an entirely predictable path based on the shifting incentives: now hundreds of thousands of businesses face the same massive hazard of being blocklisted without adequate human review, and have only mediocre options to respond if it occurs.

Without a shift in incentives, it's unlikely the outlook will improve. Unless the organisations affected (and those vulnerable) can organise and exert enough pressure for Google to notice and adjust course, we're probably going to be stuck like this, or worse, for a long time.

  • Blacklisting a site incorrectly seems like a perfectly adequate reason for a defamation lawsuit. So, I think the real issue is with the legal system.

  • > this awful scenario has followed an entirely predictable path

    The interesting thing about predictable paths is that at the start there are a LOT of them, and over time only one of them remains. I don't see that this path was any more predictable at the start than any other.

It feels like the need for automated systems is a result of the ever-increasing size of the world (there are now nearly 5 billion internet users[0]). For Apple, app review can take days, mainly because doing human review [consistently] well for 8 hours a day isn't easy[1], leading to staffing issues when bad reviewers get weeded out and only a small percentage of hires stick around. Short of hiring 10,000 employees just to endlessly review phishing links for 40 hours a week, you need automation to triage these phishing sites and deal with the fallout later, such as via on-demand review by a human (which worked in this case, but won't always - humans still make mistakes). I'm not sure there is a solution to this problem, outside of just not having the Safe Browsing product, if 'makes no errors' is a requirement.

0: https://en.wikipedia.org/wiki/Global_Internet_usage

1: https://www.businessinsider.com/heres-why-it-really-sucks-to...
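For what it's worth, the triage-then-appeal flow that comment describes can be sketched in a few lines. This is purely illustrative: the class, thresholds, and method names are all made up for the example, not any real Safe Browsing pipeline.

```python
# Hypothetical sketch: automated triage with an on-demand human-review
# fallback. All names and thresholds here are invented for illustration.
from dataclasses import dataclass, field

BLOCK_THRESHOLD = 0.9   # assumed cutoff: auto-block above this score
REVIEW_THRESHOLD = 0.5  # assumed cutoff: queue for a human above this

@dataclass
class TriageQueue:
    blocked: list = field(default_factory=list)
    human_review: list = field(default_factory=list)
    allowed: list = field(default_factory=list)

    def triage(self, url: str, phishing_score: float) -> str:
        # High-confidence detections are blocked automatically;
        # uncertain ones wait for a human instead of being blocked.
        if phishing_score >= BLOCK_THRESHOLD:
            self.blocked.append(url)
            return "blocked"
        if phishing_score >= REVIEW_THRESHOLD:
            self.human_review.append(url)
            return "pending human review"
        self.allowed.append(url)
        return "allowed"

    def appeal(self, url: str) -> str:
        # An appeal forces the on-demand human review the comment mentions:
        # the site comes off the blocklist only via the human queue.
        if url in self.blocked:
            self.blocked.remove(url)
            self.human_review.append(url)
            return "escalated to human review"
        return "not blocked"

q = TriageQueue()
print(q.triage("https://example.com/login", 0.95))  # blocked
print(q.appeal("https://example.com/login"))        # escalated to human review
```

The point of the sketch is the middle branch: automation only decides the clear-cut cases, and everything uncertain (or appealed) lands in a human queue, which is exactly the part that doesn't scale cheaply.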

  • There's no reason the number of humans dealing with these problems can't scale alongside the number of humans creating them.

    But it's a lot cheaper to pay for a few really expensive programmers to make a just-good-enough AI than to pay for thousands of human moderators. So we end up with a stupid computer creating tonnes of human misery all for the sake of FAANG's already fat profit margins.

    • "So we end up with a stupid computer creating tonnes of human misery all for the sake of FAANG's already fat profit margins."

      I don't want to blame this entirely on the big companies, though. People also want and expect "free" things on the internet. That is how we ended up like this.

    • > There's no reason the number of humans dealing with these problems can't scale alongside the number of humans creating them.

      I would think the attackers are using automation too, spamming attacks as in other areas of fraud. Ultimately it can only be a battle of AI versus AI.


> I really wish there was more human involvement in a lot of these seemingly arbitrary AI-taken actions.

Narrator: but it was only ever going to get worse.

Couldn’t agree more; transparency is key. It enables faith in both the system and the outcome.

The counter-argument to transparency will be that it gives too much information to those aiming to build phishing sites that evade the filter.

That said, we’ve seen plenty of systems in which obfuscation wins out over transparency, and it would be nice to tackle the challenges of transparency instead.