Comment by yuvalr1
5 years ago
This is an amazing story. It really demonstrates the way we pave our road to hell with good intentions...
We should really do something about this issue, where so few companies (arguably, a single one) hold so much power over the most fundamental technology of the era.
Here-here! I really wish there was more human involvement in a lot of these seemingly arbitrary AI-taken actions. Everything from app review to websites and more. This heavy reliance on automated systems has led us down this road. Shoot, keep the automation, just give us the option to guarantee human review - with transparency, of course. We don't need any more "some human looked at this and agreed, the decision is final, goodbye."
I know it's easier said than done, especially when taking the scale of the requests into account, but the alternative has done, does, and will continue to do serious harm to the many people and businesses caught in this wide, automated net.
It's interesting how closely the unfolding of this awful scenario has followed an entirely predictable path based on the shifting incentives: now hundreds of thousands of businesses face the same massive hazard of being blocklisted without adequate human review, and with only mediocre options to respond if it happens.
Without a shift in incentives, it's unlikely the outlook will improve. Unless the organisations affected (and those vulnerable) can organise and exert enough pressure for Google to notice and adjust course, we're probably going to be stuck like this (or worse) for a long time.
Blacklisting a site incorrectly seems like a perfectly adequate reason for a defamation lawsuit. So, I think the real issue is with the legal system.
1 reply →
> this awful scenario has followed an entirely predictable path
The interesting thing about predictable paths is that at the start there are a LOT of them; over time, only one of them remains. I don't see that this path was any more predictable at the start than any other.
It feels like the need for automated systems is a result of the ever-increasing size of the world (there are now nearly 5 billion internet users[0]). For Apple, app review can take days, mainly because doing human review [consistently] well for 8 hours a day, every day, isn't easy[1], leading to staffing issues when bad reviewers get weeded out and only a small percentage of hires stick around. Short of hiring 10,000 employees just to endlessly review phishing links for 40 hours a week, you need automation to triage these phishing sites and deal with the outcome later, such as via on-demand review by a human (which worked in this case, but won't always work - humans still make mistakes). I'm not sure there is a solution to this problem, outside of just not having the safe browsing product, if 'makes no errors' is a requirement.
0: https://en.wikipedia.org/wiki/Global_Internet_usage
1: https://www.businessinsider.com/heres-why-it-really-sucks-to...
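The triage-then-appeal flow described above can be sketched in a few lines. This is a hypothetical illustration of the idea (automation blocks eagerly, humans only review on appeal), not any real Safe Browsing API - all names here are made up:

```python
# Hypothetical sketch: an automated classifier flags URLs immediately,
# and scarce human reviewer time is spent only on URLs whose owners
# appeal. Illustrative only; not a real blocklist/Safe Browsing API.
from dataclasses import dataclass, field


@dataclass
class Triage:
    blocklist: set = field(default_factory=set)
    review_queue: list = field(default_factory=list)

    def auto_flag(self, url: str) -> None:
        # The automated side errs toward blocking; mistakes are
        # corrected later through the appeal path below.
        self.blocklist.add(url)

    def is_blocked(self, url: str) -> bool:
        return url in self.blocklist

    def appeal(self, url: str) -> None:
        # On-demand human review: only appealed URLs cost reviewer time.
        if url in self.blocklist and url not in self.review_queue:
            self.review_queue.append(url)

    def human_verdict(self, url: str, is_phishing: bool) -> None:
        # A human confirms or overturns the automated decision.
        self.review_queue.remove(url)
        if not is_phishing:
            self.blocklist.discard(url)


t = Triage()
t.auto_flag("https://example.com/login")   # false positive gets blocked
t.appeal("https://example.com/login")      # owner requests human review
t.human_verdict("https://example.com/login", is_phishing=False)
print(t.is_blocked("https://example.com/login"))  # False
```

The trade-off the comment describes lives in `auto_flag`: it is cheap and instant, while the human path is slow and only triggered by whoever notices and appeals - which is exactly why false positives can linger unnoticed.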
There's no reason the number of humans dealing with these problems can't scale alongside the number of humans creating them.
But it's a lot cheaper to pay for a few really expensive programmers to make a just-good-enough AI than to pay for thousands of human moderators. So we end up with a stupid computer creating tonnes of human misery all for the sake of FAANG's already fat profit margins.
4 replies →
:s/Here-here/Hear hear/
Hear hear!
Where where!?
Here here!
> I really wish there was more human involvement in a lot of these seemingly arbitrary AI-taken actions.
Narrator: but it was only ever to get worse
Couldn’t agree more; transparency is key. It enables faith in the system and in the outcome.
The counter argument to transparency will be that it provides too much information to those who aim to build phishing sites not blocked by the filter.
That said, we’ve experienced systems in which obfuscation wins out over transparency and it would be nice to tackle the challenges of transparency.
Are you implying that the list no longer has a good intention? I wouldn't be surprised if there are multiple orders of magnitude more phishing and hacked websites in 2021 than there were in 2004. Even with human checking, I doubt you'd ever have a 0% failure rate. Is the solution to just give up on blocking phishing sites?
The failure rate doesn't need to be 0%. If the solution is good, it'll at least be close to 0%, which means it'd be possible for the vendor to provide better support for the small number of mistakes, so that they can be clearly explained to the affected party and rectified more quickly. If the failure rate is so high that better support is infeasible, then the current solution is not really a good one and we need to consider a revision.
> Are you implying that the list no longer has a good intention?
Most of the time I run into blocked sites they seem to be blocked because of copyright infringement, not phishing. The only phishing sites I've seen in the last year or so are custom tailored. For example, I had to deal with a compromised MS365 account last year where the bad actor spun up a custom phishing site using the logo, signature, etc. of the victim.
So IMHO the intentions are no longer pure plus the effect is diminished and being worked around.
The solution is for the legitimate sites that are driven out of business by Google AI to sue Google for tortious interference and libel.
This helps one group and hurts another. If Google is liable for blocking potential malware and phishing pages, they'll either stop blocking it, or adjust their algorithm to strongly err on the side of allowing phishing sites.
Businesses become safer, but more regular people will get phished.
9 replies →
>Is the solution to just give up on blocking phishing sites?
IMHO yes. It's too much power for one company to wield. And especially a company with such questionable morals as Google. This cure is worse than the disease.
I thought you said the curse is worse than the disease... which also would've made sense.
> Is the solution to just give up on blocking phishing sites?
But maybe not do it by default at the browser level.
But if you do, then there really needs to be ways to combat wrong decisions in a timely manner.
The solution is simple: Liability. As soon as it becomes legally infeasible to let algorithms block people, it will stop happening.
Make it easy and affordable to submit legal complaints for tech misbehavior and make the penalties hurt.
Ah, so you suggest liability for the vendors of the software blocking websites, with, in practice [1], no liability for the operators of a compromised website, if it is phishing/malware?
This is a great approach, if your goal is to optimize for increasing the amount of dangerous crap on the web. But, eh, that's surely worth it, because the profitability of startups is more important than little things like the security of the average netizen...
[1] Even if you make the operators liable [2], in practice, you'll never be able to collect from most of them. Whereas the blacklist curators are a singular, convenient target...
[2] If you can demonstrate how the operators of compromised websites can be held liable for all the harm they cause, I will happily agree that we should do away with blacklists. Unfortunately, the technical and legislative solutions for this are much worse than the disease you are trying to treat.
Since phishing is not going to go anywhere with or without blacklists - for obvious reasons, e.g. lists can't cover everything, and you can't add sites to the list instantly - I am willing to tolerate a slight increase in phishing, which is going to exist anyway, in exchange for not having Google (or any other megacorp, or any other organization for that matter) as a gatekeeper of everybody's access to the internet. The potential for abuse of such power is much greater and much more dangerous than the danger from a tiny increase in phishing.
3 replies →
This was the case with railroads too: only a few controlled the biggest, most transformative, and most business-integral tech of the 1800s.
Prior to that it was those that controlled the printing presses.
...
History continues to repeat itself.