Comment by p410n3
5 years ago
This happens again and again. I have had it happen to my Twitter account, and I see it regularly on HN.
My suspicion is that this is mostly happening because platforms as big as Google or Twitter rely very heavily on machine learning and other AI-related technology to ban people. Because honestly, the amount of spam and abuse on these platforms has to be mind-bogglingly high.
So I get why they would try to automate bans.
But after years and years of regular high-profile news of false positives, one would think they would eventually change something.
I mean the guy had direct business with Google going on....
Why would they continue like that? Isn't there a single PR person at Google?
> So I get why they would try to automate bans.
The problem is less the automated bans and more the missing human support after you get automatically banned.
If a ban went through a reasonably fast human review process, with a temporary reinstatement a day later and full reinstatement a few days later, it would be super annoying, comparable to all Google services being down for a day, but nowhere close to the degree of damage it causes now.
And let's be honest, Google could totally afford a human review process, even if they limited it to accounts that have a certain age and have been used from time to time (to make it much harder to abuse).
But they are about as interested in this as they are in giving out reasons why you were banned, because if they did, you might be able to sue them for arbitrary discrimination against people who fall into some arbitrary category. Or similar.
What lawmakers should do is require proper reasons to be given for service termination of any kind, without allowing any kind of opt-out.
> And let's be honest, Google could totally afford a human review process
This is the part I find baffling. Why can't they take ten Google engineers' worth of salaries and hire a small army of overseas customer reps to handle cases like this? I realize that no customer support has been in Google's DNA since the beginning, but this is such a weird hill to die on.
> This is the part I find baffling. Why can't they take ten Google engineers' worth of salaries and hire a small army of overseas customer reps to handle cases like this? I realize that no customer support has been in Google's DNA since the beginning, but this is such a weird hill to die on.
My best guesses:
1. The number of automated scams/attacks and associated support requests is unbounded, while human labor is bounded, so it's a losing investment.
2. Machine learning lets attackers undo the anti-abuse work at the cost of a low number of false positives from human intervention: throw small behavioral variants of banned scam/attack accounts at support and optimize for the highest reinstatement rate. This abuse traffic would be the bulk of what the humans have to deal with.
3. They'd probably be hiring a non-negligible percentage of the same people who are running the scams. The risk of insider abuse is untenable.
2 replies →
They could start by having support for all the accounts that make significant amounts of money for them. If an account makes Google >$100k a year, isn't it worth having support personnel to handle the two tickets that account might file in a year? The rest of the time they can focus on other tickets.
This shows the bias in machine learning: one simple parameter isn't added, and the whole model is bullshit.
One parameter would be: amount of money this customer has spent on our products.
Another would be: active time since signup.
I'm pretty sure "money spent > 0" is actually a legitimate threshold to remove a lot of spam, although not all. "money spent > 200" might do the trick though.
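A minimal sketch of what such a gate might look like. The field names and thresholds here are invented for illustration, not anything Google actually uses:

```python
# Hypothetical sketch: route ban decisions past a human when an account
# crosses simple spend/age signals, instead of auto-banning everyone.
from dataclasses import dataclass

@dataclass
class Account:
    money_spent: float   # lifetime spend in USD (made-up field)
    active_days: int     # days of activity since signup (made-up field)

def requires_human_review(account: Account,
                          spend_threshold: float = 200.0,
                          age_threshold: int = 365) -> bool:
    """Only accounts below both thresholds are eligible for an
    automated ban; everything else goes to a human first."""
    return (account.money_spent > spend_threshold
            or account.active_days > age_threshold)
```

The point is not that two features fix moderation, but that cheap, already-available signals could keep the worst false positives away from the fully automated path.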
Forget ML, this is just business process mapping. If it's a payer-customer's account, issues should be sent to a human. Payer-customers should have access to a secondary channel (read: an alternate phone number). Payer-customers' Google contact(s) should be notified & included in the process.
As a general rule of thumb, if Google is struggling with a problem, it's not a tech problem.
This can be gamed. There are so many stolen credit card numbers and pre-paid Apple/Google cards out there that it's not difficult to automatically build accounts with this kind of 'reputation'.
Unfortunately the best way to do KYC is (still) human intervention (and use of data).
It is significantly harder to game, though - companies successfully offer behavioral monitoring in DLP products with far less data than the payment data Google has access to. Years of payments with a certain payment type? That's a pattern. Renting movies at a certain time of the week? That's another... The truth of the matter is, somebody has to actually care to do this. From the accounts of Googlers I've read, that's not what Google's culture is likely to produce, though.
It can be gamed. But if the average value of a fake account is $100 and you set the threshold to $200, it is no longer profitable.
Of course this still isn't a perfect metric. But banning people whose accounts have spent thousands of dollars and been active for many years should probably be avoided, and this would significantly help with that.
I mean if the account has spent >$50 you can probably afford a human review at the very least.
It won't change until they start bleeding enough users that it actually starts hurting them. In other words, when they mess up with someone "important enough" prepared to hold a serious grudge.
[EDIT: I still hold a grudge against DHL for 20 years ago listing my credit cards as "in transit to South Korea" while I was in Santa Cruz, waiting for them. If Google hits someone with an actual large following or sufficient clout in a large company, then they might just find that one day they do so to someone prepared to hold a 20 year grudge even if they eventually fix the immediate issue -- I'm not mad at DHL for the initial mistake, but for the amount of trouble and lies I had to deal with before they took it seriously]
These companies are maximizing their margins at our expense.
> "the amount of spam and abuse that are likely happening on these platforms has to be mind boggling high"
That is true, but the amount of money these platforms are making is mind bogglingly high, too. It's just that they decided that they will use low-cost automated methods in order to maximize margins. And as long as we all accept this, it's a good decision: more money!
But it is absolutely possible to do these things right, it just costs more.
> Because honestly, the amount of spam and abuse that are likely happening on these platforms has to be mind boggling high.
So hire more people. You can't argue that you can't do your work properly because your AI is not yet up to the task.
Agree. I find it odd that so many people bring up this argument, as if these companies weren't sitting on piles of cash that could be invested in systemic, human-in-the-loop improvements. (OK, maybe except Twitter)
You think humans are better at spotting abuse? Mods on Reddit demonstrate that such systems can be worse.
You've shown that it's possible for human moderation to be awful; you haven't shown that it's impossible for human moderation to work well. It is possible. Hacker News is a fine example.
Paid moderators can have their work supervised (a 'meta-moderation' system), akin to Slashdot.
Perhaps, but at least you can talk to a human, which is another aspect of the problem and probably requires a similar solution (more humans).
Reddit also has AI that can shadowban you.
There's virtually no chance that the automated system that banned him knew the account belonged to someone with whom Stadia was doing business. Even if we assume there's a list of high profile people/accounts not to automatically disable, I can't see him being on it.
I think the point is that he has direct business with Google and yet _even he_ can't get his account unbanned.
If someone in that position is screwed, an average joe is most definitely screwed.
Notably, it also happened to an employee's husband:
https://news.ycombinator.com/item?id=24791357
5 replies →
It's possible to have a system that marks high-profile accounts that shouldn't have automated actions applied to them... that Google apparently doesn't have something like this is worrying.
Then again, if all high profile accounts were exempt from being auto banned then there would be even less chance of problems being brought to light.
They then become high-profile targets for takeovers, and can run amok for too long before being disabled.
He is the developer of Terraria, and even their official YouTube channel has been suspended. What does a guy have to do to become a true Scotsman? Fallacy?
Couldn't care less about Twitter, but if you use Google for email/storage/docs etc. then it's a real issue.
Email is how I do business and access other websites, and I store important documents in the cloud.
Like you, I've seen the ban issue many times, and even worse, there's no customer support to help (just automated responses). Ever since, I've been migrating away from Google.
Maybe the solution is to not have single platforms that are this big.
Then move off. It's not the only solution.
There are alternatives to all of these: search, email, game streaming, online doc editing, etc.
> Then move off.
Great, let's legislate that you can switch providers but you have to be able to keep your email address, like we did with phones.
10 replies →
> Then move off.
It works for you (as in, a single person). Not for your friends and family, who will one day ask you what to do about the account they lost.
We (technical people) know this happens and have seen it happen - it is on us to push for a better solution than convincing one person at a time. Unless one prefers nihilism and watching the world burn, of course.
6 replies →
I'd argue there's no real alternative to YouTube. There's got to be orders of magnitude more content there than all of its competitors combined.
5 replies →
Don't like it? Build your own.... Everything.
Popularity cannot be dictated, unless you're suggesting something like a regulation that would limit the total number of users a website is allowed to register.
Network effects are pretty handy, though.
And that is why "innocent until proved guilty" is such an important tenet of Western justice.
I think there is a simple solution: the "fail2ban" approach. Instead of banning, lock users out for some time (e.g. one day). An AI system should only make temporary changes to your IAM, and accounts that get disabled too often should be reported to a human being.
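A rough sketch of that escalation policy. The class name, thresholds, and return values are all invented for illustration; fail2ban itself works on IPs and log patterns, and this only borrows its temporary-lockout idea:

```python
# Hypothetical sketch of the "fail2ban" approach: a flag from the ML
# system produces a temporary lockout, and accounts flagged repeatedly
# are escalated to a human reviewer instead of being permanently banned.
import time

LOCKOUT_SECONDS = 24 * 60 * 60   # one-day lockout (assumed)
ESCALATE_AFTER = 3               # flags before a human looks (assumed)

class TempBanPolicy:
    def __init__(self):
        self.lockouts = {}       # user_id -> (flag_count, locked_until)

    def flag(self, user_id, now=None):
        """Record an ML flag; return 'locked' or 'escalate'."""
        now = time.time() if now is None else now
        count, _ = self.lockouts.get(user_id, (0, 0.0))
        count += 1
        self.lockouts[user_id] = (count, now + LOCKOUT_SECONDS)
        return "escalate" if count >= ESCALATE_AFTER else "locked"

    def is_locked(self, user_id, now=None):
        now = time.time() if now is None else now
        _, until = self.lockouts.get(user_id, (0, 0.0))
        return now < until
```

The key property is that a false positive costs a user one day instead of their whole digital life, and repeat offenders still end up in front of a human.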
> My suspicion is that this is mostly happening because platforms that big like google or twitter rely very heavily on machine learning and other AI related technology to ban people
Most likely, yes. And the annoying thing is that they don't take different languages into account. The AI can recognize words, but not meaning.
A while ago some Dutch person tweeted: "Die Bernie Sanders toch." Die = that, in Dutch. But the AI obviously recognized the English word 'die' along with Bernie Sanders and just instantly dropped the ban hammer. And it takes days, if not weeks, to get an actual human to look at your case.
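To illustrate the failure mode (this is a toy sketch, not Twitter's actual system): a naive keyword filter tokenizes text and matches English words with no notion of which language the sentence is in, so the Dutch word "die" ("that") trips an English threat list:

```python
# Toy keyword filter showing the cross-language false positive.
# The word list is invented for illustration.
import re

THREAT_WORDS = {"die", "kill"}

def naive_flag(text: str) -> bool:
    """Flag text if any lowercase token matches the threat list,
    regardless of the language the text is actually written in."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(tok in THREAT_WORDS for tok in tokens)
```

Any system that matches tokens without language identification or context will flag "Die Bernie Sanders toch." exactly as described above.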
It was like a couple of weeks ago when an Android app got banned from the Play Store because they supported Advanced SubStation Alpha (ASS) subtitles and mentioned it in the description.
Yes, and it's proof there is no such thing as "AI", just stupid pattern matching programmed by not very brilliant people.
These are exactly the cases that worry me. ML/AI is not ready to be used like that. I don't know if it ever will be, but they are already using it in production anyway.
It reminds me of when powerful institutions treat lie detectors or facial recognition systems as infallible.
Worse than that, these systems are perfect for decision laundering. You can make the system render arbitrary judgments and blame any negative consequences on "bias in the training data" or such.
regex != ML
They've applied ML to distinguish status updates from emails. They've applied ML to recognize speech fairly accurately... This kind of behavior seems far too unsophisticated for that. In the Twitter thread some people are suggesting it has something to do with politics. If that's so, then it's likely a hands-on-keyboard, finger-on-the-scales thing that a human would cause.
Their size insulates them from competition, which means less accountability.
We need to give them competition in the form of neutral and permissionless decentralized platforms. Such platforms should be the primary forum for commerce and communication, and privately owned permissioned platforms like Google should be small/bit players in comparison.
Right now the situation, in terms of whether the digital commons are primarily controlled by private companies or by public networks, is the opposite of what it should be.
> Why would they continue like that? Isn't there a single PR person at Google?
Does bad PR actually cost Google money? I'm not sure it does.
A bunch of advertisers claimed they were going to boycott Facebook, but they didn't stick with it, and it didn't meaningfully impact FB revenue.
I think the only thing that will really dent Google at this point is privacy legislation, so the only PR they're worried about is upsetting legislators -- not upsetting game devs.
Regardless of what's happening internally, I've come to the realization that Google has become the prototypical dystopian corporation. Yes, perhaps not the only one, and perhaps I should have come to this realization sooner, but there it is.
Taking the long view, the apparent culture of "just don't give a sh*" isn't going to work for the human race, not in the long run.
Well, frankly speaking, as an individual or a small company you do not matter much, especially compared to the cost of getting the problem fixed. As an organization grows larger, it has to employ lots of processes, which are obviously not perfect, to make things work. When it grows even larger, it has to change existing processes, abolish processes that are no longer appropriate, and layer new processes over existing ones to serve the business better. Unavoidably, more and more automation is introduced, and eventually AI. All these changes seem minor and clear, and they work in most cases. Yep, I mean most cases, not all cases. Then suddenly, something that really should work per every standard and process stops working, and no one really knows why. So here comes the question: if you are the decision maker, and your system works for 99.999999%, maybe even 99.999999999%, of your customers but not for those 1, maybe 10, customers, are you going to spend $$$$$$$$$$$$$$$$$$$$$ to get it fixed?
> Why would they continue like that?
Sheer hubris?
> Sheer hubris?
I would actually lean towards organizational incompetence. There is just too much human brain mass at Google to say that the company as a whole is screwing up this badly because of hubris. They are just at such a high complexity level that the disorganization is causing incompetent outcomes.
Yeah. It's super scary. An algorithm decides, there's no legal recourse, and it's all run by a company that has an illegal amount of control over what is supposed to be public space.
Imagine all the public squares being owned by some company rather than the community. Now imagine an algorithm deciding to exclude you from them. To just ban you from participating in life.
It is taking too long for Google to understand what they need to do (to own public space, you must bring all the other public stuff too, like a legal system, proper rights protection, and due diligence).
We should kill the monster while we still can. Break them up. They'll never learn. They'll keep destroying lives. Less than 0.1% is an acceptable statistical error, right? Just pray you are never in the 0.1%.
Please consider indieweb.org/POSSE so you don't lose your digital home when huge organisations cancel tiny ones.
The big ones just cannot care about everyone, even if they really wanted to. They would have to be both omniscient and omnipotent.
> Why would they continue like that? Isn't there a single PR person at Google?
Because they can afford it, they are a monopoly
There's another post making the rounds on HN at the moment: "Chatbots were the next big thing: what happened?"
This. This is why. Bots, chat or otherwise, are not competent enough to replace humans.
Actually, sometimes humans aren't that great at this either, if poorly paid/motivated/trusted.
I wrote an essay about big tech's aim for a monopoly on moderation.
https://www.remarkbox.com/remarkbox-is-now-pay-what-you-can....
I just gave up the last time my google account died. There's really little value in it at this point if you're not publishing apps and I would never build a business on one of their platforms for this reason anyway.
Google is above needing PR. Or at least they seem to think so.
Using an AI to automate banning is no excuse for a multi-billion-dollar company such as Google being unable to quickly redress the problem.
In fact, it should probably be illegal for companies to automatically ban any of their users/customers with AI/algorithms without being able to respond to said complaints within 24h.
Bottom line is that Google should have better customer support, because it's not like they can't afford it.
The only reason they don't have good support is because they are a monopoly and monopolies don't care about the repercussions to any individual customer unless something is illegal.
Businesses that have this happen to them should call a lawyer and sue. That ought to get a human on the line...