Comment by mqus

5 years ago

> * GDPR/Privacy laws (The law requires the deletion of no-longer-needed data. As soon as his account gets banned, the data is no longer needed for Google's business purposes (of providing service to him), so the deletion process can't be delayed.)

This is simply wrong, since the account is always "banned" and not "deleted". The data is still there, and not providing it goes against the GDPR. Evidence for this is all the accounts that were unbanned and still had their data. Make the account read-only for all I care, but don't think for a second that this data has to be deleted immediately (it definitely does not; there are legitimate reasons and reasonable ways for data to be retained for some time).

> * Untrustable employees. Google tries not to trust any employee with blanket access to your account. That means they couldn't even hire a bunch of workers to review these accounts - without being able to see the account private data, the employee wouldn't be able to tell good from bad accounts.

But somehow accounts get unbanned if they get enough attention... so this does not seem to be a problem.

> * Attacks on accounts. There are ways for someone who doesn't like you to get a Google account banned. Usually there are no logs kept (due to privacy reasons) that help identify what happened. Example method: Email someone a PDF file containing an illegal image, then trick them into clicking "save to drive". The PDF can have the image outside the border of the page so it looks totally normal.

So simultaneously you can look at the image to ban the account, but can't look at it to unban it? I get that the first is done by algorithms and the second presumably is not, but calling this a privacy issue is laughable, since no one has to look at the content in the first place.

None of your points addresses the issue that the user does not even know why he was banned. Luckily there are EU laws in the pipeline for that.

> But somehow accounts get unbanned if they get enough attention... so this does not seem to be a problem.

Having 10 highly paid, long-tenured engineering employees who can look at small parts of a user's account data is clearly better than having 10,000 call-center workers able to access users' private data.

The end result is that high-profile incidents get handled in a way that would be too risky to apply to everyone.

Even with the small pool of engineers, there are incidents[1] where user data is used inappropriately. Would you make this pool larger?

[1]: https://www.businessinsider.com/google-engineer-stalked-teen...

  • Or how about this: when the engine triggers a ban it just notes the reason for the ban in the database, and then tells the user why the ban happened?

    I don't see why all the reasons above mean basic transparency can't happen.

    • Sadly, this would make the system utterly trivial to game. Google has billions of accounts (Chrome alone has 2B users). I say "utterly trivial" because "XYZ is likely" events that might occur among xxx,xxx users translate into the sheer overwhelming force of statistics when you get to x,xxx,xxx,xxx users: if you have 100,000 users and just 10 people successfully figure out how something works internally, scaling to 1,000,000,000 users increases that pool of 10 people to 100,000. And a pool of 100,000 proactive, interested people is more than enough to create several thousand cottage industries, then lots of competition, until one or two emerge at the top and become an exponential force, etc.
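      The scaling claim above is just proportional arithmetic; a minimal sketch, using the commenter's illustrative numbers (these are not real figures):

      ```python
      # Assumption: the fraction of users who work out an internal rule
      # stays constant as the user base grows.
      small_users = 100_000
      small_pool = 10  # people who figure out how the ban engine works

      large_users = 1_000_000_000
      scale = large_users // small_users  # 10,000x more users
      large_pool = small_pool * scale     # same fraction, bigger base

      print(large_pool)  # prints 100000
      ```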
