Comment by dathinab

5 years ago

> So I get why they would try to automate bans.

The problem is less the automated bans themselves than the missing human support after you get automatically banned.

If, after you got banned, you could go through a reasonably fast human review process, be temporarily reinstated a day later, and fully reinstated a few days later, it would still be super annoying (comparable to all Google services being down for a day), but nowhere close to the degree of damage it causes now.

And let's be honest, Google could totally afford a human review process, even if they limited it to accounts that are a certain age and have been used from time to time (to make it much harder to abuse).

But they are no more interested in this than they are in giving out reasons why you were banned, because if they did, you might be able to sue them for arbitrary discrimination against people who fall into some arbitrary category, or similar.

What lawmakers should do is require that proper reasons be given for service termination of any kind, without allowing any kind of opt-out.

> And lets be honest google could totally affort a human review process

This is the part I find baffling. Why can't they take 10 Google engineers' worth of salaries and hire a small army of overseas customer reps to handle cases like this? I realize that having no customer support has been in Google's DNA since the beginning, but this is such a weird hill to die on.

  • > This is the part I find baffling. Why can't they take 10 Google engineers' worth of salaries and hire a small army of overseas customer reps to handle cases like this? I realize that having no customer support has been in Google's DNA since the beginning, but this is such a weird hill to die on.

    My best guesses:

    1. The number of automated scams/attacks, and the support requests they generate, is unbounded, while human labor is bounded, so it's a losing investment.

    2. Machine learning lets attackers exploit even a low false-positive rate in human review to undo the anti-abuse work: throw small behavioral variants of banned scam/attack accounts at support and optimize for the highest reinstatement rate. This abuse traffic would be the bulk of what the humans have to deal with.

    3. They'd probably be hiring a non-negligible percentage of the same people who are running the scams. The risk of insider abuse is untenable.

    • > They'd probably be hiring a non-negligible percentage of the same people who are running the scams. The risk of insider abuse is untenable.

      This is the first time I've heard someone make this claim. Is there prior evidence of this being a regular occurrence in outsourced customer support operations?


  • They could start by having support for all the accounts that make significant amounts of money for them. If an account makes Google >$100k a year, isn't it worth it to have support personnel who will handle the 2 tickets that account might file in a year? The rest of the time they can focus on other tickets.