Google Cloud accidentally deletes customer's account

9 months ago (theregister.com)

The cloud is someone else's computer. Convenience isn't always reliability and security.

For anyone who is semi-technical, or not as technical as they'd like to be when it comes to file storage, and wondering what they can do...

- Whether it's corporate clients, small businesses, or individuals: I universally recommend owning a small QNAP or Synology, treating storage as a zero-maintenance appliance, running software that maintains a two-way sync of your cloud drives.

- Even if you're using Google Cloud, MSFT, etc., continue to use them as you please; just siphon off a local copy of your data in case the internet or the cloud is down. It can also make some kinds of disaster recovery much quicker: if you back up your computers locally to a NAS, and from there to the cloud, recovery is a lot more manageable.

- Throwing something like Tailscale on it makes it invisible to the public internet while still reachable from all your devices. I never use the NAS vendor's own remote-access tools, as they're likely a juicy target to break into.

- Last but not least, set up a different form of backup that runs automatically: back up the file appliance itself to something like sync.com, tarsnap.com, Backblaze, etc., so there's another copy elsewhere that you can access. A rough sketch of the whole setup follows this list.
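Not a product pitch, just a rough sketch of what the above can look like on any NAS that can run scheduled tasks, assuming rclone remotes named gdrive and b2 are already configured and Tailscale is installed (the paths, remote names, and bucket names are made up):

    # 1. mirror the cloud drive down to the NAS; --backup-dir keeps anything
    #    deleted or changed upstream (use "rclone bisync" instead if you
    #    really want a true two-way sync)
    rclone sync gdrive: /volume1/mirror/gdrive --backup-dir /volume1/mirror/gdrive-old

    # 2. push the NAS copy to a second, unrelated provider (Backblaze B2 here)
    rclone sync /volume1/mirror b2:my-offsite-backups/mirror

    # 3. reach the NAS only over Tailscale; no port forwarding and no vendor
    #    "quick connect" relay exposed to the internet
    tailscale up

Run the first two from the NAS's task scheduler; the point is only that the local copy and the off-site copy don't share a provider, an account, or a delete button with the original.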

I'd be happy to learn what anyone else is doing. As someone who lost a ton of data on a Microsoft account: once, never twice.

*I am perfectly capable of building a NAS myself running an open source package, but at home storage should be an appliance, so I can focus on other things.

The customer isn't exactly small either - "UniSuper is an Australian superannuation fund that provides superannuation services to employees of Australia's higher education and research sector. The fund has over 620,000 members and $120 billion in assets (funds under management and total member accounts at 7 July 2021)."[1]

https://en.wikipedia.org/wiki/UniSuper

> In the meantime, UniSuper's woes remain a lesson for companies leaping cloudwards. Someone clicking the wrong button, a previously unknown bug, an unforeseen series of events…

Here’s the thing though. Back in the Dark Ages before cloud services, everyone had to self-host. We had a Data General AViiON server (DG/UX FTW!) in a dedicated room, and one of the first RAID arrays in Australia (a predecessor of CLARiiON).

The cover was off the front of the array for some reason, and I had to squeeze past a coworker to get out of the room.

When I sat down at my Wyse60 terminal to do some work, a bunch of errors started appearing on my screen. It turned out I had also “squeezed past” the power button on the RAID array, which was normally recessed - but not with the cover off. I’d inadvertently shut down the whole system. Fortunately we were in preproduction, so nobody really noticed. But it scared the crap out of me.

I knew someone at a small bank who told me they were also susceptible to similar problems. Just one big server, let’s hope it doesn’t go down.

Cloud services - especially IaaS - _enable_ diversification, and it sounds like UniSuper’s IT team should be congratulated for understanding what this really means in the context of networked services. Diverse networks, diverse suppliers, diverse geography.

Without cloud services, none of this is feasible for most SMEs.

There are plenty of things we can complain about with the cloud but “someone clicking the wrong button” is even more of a risk if you run your gear in house.

Whoever decided to put the data in a different cloud provider needs a raise and all the praise.

  • Not just data: the backups were at an isolated place.

    With DB-as-a-service platforms becoming popular, I could imagine a setup where data and backups are at a single different provider (and in this setup one random account deletion could still mean data loss).

This is the website of the affected customer and their incident page:

"We're progressively restoring UniSuper services", https://www.unisuper.com.au/contact-us/outage-update

The first entry is from May 2nd, indicating a total outage. Today they enabled user sign-ins and still write that "services will continue to be progressively brought online".

We provide IT support for small businesses, and it's almost impossible for them to understand that their 'cloud' data isn't backed up.

It takes a lot of education combined with making the backups not cost too much.

"Fortunately, UniSuper had backups at another cloud provider. Otherwise, a bad situation could have been oh so much worse."

Years ago we ran ad campaigns on reddit that said something like:

"Your data is stored on AWS and your backups are stored on AWS ... you're doing it wrong."

... and they got almost zero traction.

In fact, many people were angered by the suggestion that data at a major cloud provider could be at risk in any way.

  • I call it “Cloud 3-2-1” backup. You really should replicate your backups to a separate commercial provider, or even a local replica (depends on context). Most often, it’s to protect yourself from yourself.

    I’ve given up on trying to convince other people, though. Fortunately for me, unlike you, it’s not my bread and butter to do so.

    • When we migrated Netflix to AWS in 2009-2011 we setup a separate archive account on AWS for backups and also made an extra copy on GCP as our “off prem” equivalent. We also did a weekly restore from archive to refresh the test account data and make sure backups were working. I’ve documented that pattern many times, some people have even implemented it…
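      Not their actual tooling, but a minimal sketch of that shape of setup, assuming separate AWS CLI profiles for the archive and test accounts plus rclone remotes for the archive bucket and a GCS bucket are already configured (every bucket, profile, and remote name below is made up):

          # push primary backups into the isolated archive account
          aws s3 sync s3://prod-backups s3://archive-backups --profile archive

          # second copy in a different cloud entirely, as the "off prem" equivalent
          rclone sync s3-archive:archive-backups gcs-offprem:archive-backups

          # weekly: restore from the archive to refresh the test account's data,
          # which doubles as proof that the backups can actually be restored
          aws s3 sync s3://archive-backups s3://test-account-data --profile test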

  • Valid thing to raise in the campaign, but also... AWS is not Google. There will often be several attempts at communication before an account is disabled and I'm not even sure what protections need to be lifted for actually deleting an account.

    Having worked with both clouds for several years, I'm intrigued by Google's services but struggle with trusting them enough to use for production.

    • It’s not only about AWS vs Google.

      It’s about insider and external threats. Operator error. System design failures.

      There’s a lot of ways to mess up your own account.

    • But doesn’t the problem only occur when these safeguards fail?

      I mean - I get that you’re saying Google has fewer checks and balances than AWS, but at some point it must be possible for the customer contact process to go wrong.

      It’s an extra slice of Swiss cheese, but it only makes it less likely, not impossible.

  • Bit of an unrelated question, and please excuse me if this is the wrong place for it - any plans to add some sort of multithreading support to rsync? Being limited to a single thread is painfully slow. My concern is with data reads (disks do much better when read in parallel) and checksumming - doing a second rsync pass over a medium-sized MySQL DB of around 2 TB is literally slower than just something like tar .. | zstd | ssh "zstd -d | tar -x".

    Doing the copy with rclone can be much, much faster as well.
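    For reference, a rough sketch of the kind of parallel-vs-single-stream comparison being made above, assuming an rclone remote for the destination is already configured (the remote name, host, and paths are made up):

        # parallel copy: many files in flight at once, plus multi-threaded
        # transfers of large files (remote "nas-sftp" is hypothetical)
        rclone copy /var/lib/mysql-backup nas-sftp:backups/mysql \
            --transfers 16 --checkers 16 --multi-thread-streams 4

        # single-stream alternative: one pipe end to end, though zstd -T0
        # at least spreads the compression across all cores
        tar -cf - /var/lib/mysql-backup | zstd -T0 | ssh backuphost "zstd -d | tar -xf - -C /restore"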

  • As always, there’s a trade-off. Native backups are typically cheaper and easier, and external backups come with their own risks. The risk of the cloud provider making a stupid mistake is so small that there are usually many other risks worth mitigating first.

    • It’s usually quite easy to replicate your backups to another completely independent AWS account (not in the same organization, different payment method, etc).

      You’re taking the backups anyway, why not at least store them somewhere that can’t be deleted by the same red button as the original data?
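      A minimal sketch of that, assuming an AWS CLI profile for the independent account and a bucket policy on the source bucket that grants that account read access (all names are made up):

          # run from the independent account (pull model), so the credentials
          # that can delete the originals have no permissions at all on the replica
          aws s3 sync s3://primary-backups s3://independent-backups --profile independent

      Pulling from the independent side rather than pushing from the primary one keeps the replica entirely out of reach of the account it is meant to protect against.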

Sounds like Google moved to the next level, from deprecating services to deleting customer accounts. Next... Chrome will accidentally use your PC for running distributed generative AI workloads.