Sharing details on a recent incident impacting one of our customers

8 months ago (cloud.google.com)

Given the level of impact that this incident caused, I am surprised that the remediations did not go deeper. They ensured that the same problem could not happen again in the same way, but that's all. So some equivalent glitch somewhere down the road could lead to a similar result (or worse; not all customers might have the same "robust and resilient architectural approach to managing risk of outage or failure").

Examples of things they could have done to systematically guard against inappropriate service termination / deletion in the future:

1. When terminating a service, temporarily place it in a state where the service is unavailable but all data is retained and can be restored at the push of a button. Discard the data after a few days. This provides an opportunity for the customer to report the problem. (A rough sketch of what this could look like follows this list.)

2. Audit all deletion workflows for all services (they only mention having reviewed GCVE). Ensure that customers are notified in advance whenever any service is terminated, even if "the deletion was triggered as a result of a parameter being left blank by Google operators using the internal tool".

3. Add manual review for any termination of a service that is in active use, above a certain size.
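
A rough, hypothetical sketch of what point 1 could look like (the names, states, and retention window are invented for illustration): the termination cuts access immediately, but the data is only purged after a grace period during which a restore is a single call.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from enum import Enum

    RETENTION = timedelta(days=7)  # hypothetical grace period before data is purged

    class State(Enum):
        ACTIVE = "active"
        SUSPENDED = "suspended"  # unavailable to users, but fully restorable
        PURGED = "purged"        # data actually gone

    @dataclass
    class Service:
        name: str
        state: State = State.ACTIVE
        suspended_at: datetime | None = None

    def terminate(svc: Service) -> None:
        # Step 1 of termination: cut access, keep the data.
        svc.state = State.SUSPENDED
        svc.suspended_at = datetime.now(timezone.utc)

    def restore(svc: Service) -> None:
        # "Push of a button" recovery while the grace period lasts.
        if svc.state is State.SUSPENDED:
            svc.state = State.ACTIVE
            svc.suspended_at = None

    def purge_expired(services: list[Service]) -> None:
        # The only hard delete: runs well after the termination decision.
        cutoff = datetime.now(timezone.utc) - RETENTION
        for svc in services:
            if svc.state is State.SUSPENDED and svc.suspended_at and svc.suspended_at < cutoff:
                svc.state = State.PURGED

The point is simply that the destructive step is decoupled from the termination decision, so a mistaken termination stays recoverable for a few days.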

Absent these broader measures, I don't find this postmortem to be in the slightest bit reassuring. Given the are-you-f*ing-kidding-me nature of the incident, I would have expected any sensible provider who takes the slightest pride in their service, or even is merely interested in protecting their reputation, to visibly go over the top in ensuring nothing like this could happen again. Instead, they've done the bare minimum. That says something bad about the culture at Google Cloud.

  • >> 1. When terminating a service, temporarily place it in a state where the service is unavailable but all data is retained and can be restored at the push of a button. Discard the data after a few days. This provides an opportunity for the customer to report the problem.

    This is so obviously "enterprise software 101" that it is telling Google is operating in 2024 without it.

    Ever since my new-hire grad days, immediately deleting data that is no longer needed has been out of the question.

    Soft deletes in databases with a column you mark as deleted. Move/rename data on disk until you're super duper sure you actually want to delete it (and maybe still let the backup remain). Etc.
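
    A minimal sketch of that pattern, using an in-memory SQLite table purely for illustration (the schema and retention window are made up): rows get a deleted_at marker, can be undeleted, and only get hard-deleted well after the fact.

        import sqlite3
        from datetime import datetime, timedelta, timezone

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE records (id TEXT PRIMARY KEY, payload TEXT, deleted_at TEXT)")

        def soft_delete(record_id: str) -> None:
            # Mark the row instead of removing it.
            now = datetime.now(timezone.utc).isoformat()
            conn.execute("UPDATE records SET deleted_at = ? WHERE id = ?", (now, record_id))

        def undelete(record_id: str) -> None:
            conn.execute("UPDATE records SET deleted_at = NULL WHERE id = ?", (record_id,))

        def purge(older_than_days: int = 30) -> None:
            # The only hard delete, run long after, once you're super duper sure.
            cutoff = (datetime.now(timezone.utc) - timedelta(days=older_than_days)).isoformat()
            conn.execute(
                "DELETE FROM records WHERE deleted_at IS NOT NULL AND deleted_at < ?", (cutoff,)
            )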

    • It sounds like the problem is that the deletion was configured with an internal tool that bypassed all those kinds of protections -- including warnings to the customer -- and went straight to the actual delete.

      Which is bizarre. Even internal tools used by reps shouldn't be performing hard deletes.

      And then I'd also love to know how the heck a default value to expire in a year ever made it past code review. I think that's the biggest howler of all. How did one person ever think there should be a default like that, and how did someone else see it and say yeah that sounds good?

    • > This is so obviously "enterprise software 101" that it is telling Google is operating in 2024 without it.

      My impression of GCP generally is that they've got some very smart people working on some very impressive advanced features and all the standard boring stuff nobody wants to do is done to the absolute bare minimum required to check the spec sheet. For all its bizarre modern enterprise-ness, I don't think Google ever really grew out of its early academic lab habits.

      4 replies →

  • Hard agree. They were clearly more interested in insisting that there's not a systemic problem in how GCP's operators manage the platform, which reads, strongly and alarmingly, as though there is a systemic problem in how GCP's operators manage the platform. The absence of the common-sense measures you outline from their postmortem just tells me that they aren't doing anything to fix it.

    • “There’s no systemic problem.”

      Meanwhile, the operators were allowed to leave a parameter blank and the default was to set a deletion time bomb.

      Not systemic my butt! That’s a process failure, and every process failure like this is a systemic problem because the system shouldn’t allow a stupid error like this.

      4 replies →

  • > When terminating a service, temporarily place it in a state where the service is unavailable but all data is retained and can be restored at the push of a button. Discard the data after a few days. This provides an opportunity for the customer to report the problem

    Replacing actual deletion with deletion flags may lead to other fun bugs like "Google Cloud fails to delete customer data, running afoul of EU rules". I suspect Google would err on the side of accidental deletions rather than accidental non-deletions: at least in the EU.

    • > I suspect Google would err on the side of accidental deletions rather than accidental non-deletions: at least in the EU.

      I certainly hope not, because that would be incredibly stupid. Customers understand the significance of different kinds of risk. This story got an incredible amount of attention among the community of people who choose between different cloud services. A story about how Google had failed to delete data on time would not have gotten nearly as much attention.

      But let us suppose for a moment that Google has no concern for their reputation, only for their legal liability. Under EU privacy rules, there might be some liability for failing to delete data on schedule -- although I strongly suspect that the kind of "this was an unavoidable one-off mistake" justifications that we see in this article would convince a court to reduce that liability.

      But what liability would they face for the deletion? This was a hedge fund managing billions of dollars. Fortunately, they had off-site backups to restore their data. If they hadn't, and it had been impossible to restore the data, how much liability could Google have faced?

      Surely, even the lawyers in charge of minimizing liability would agree: it is better to fail by keeping customers' accounts than to fail by deleting them.

      1 reply →

    • Deletion flags are acceptable under EU rules. For example, they are an accepted means of dealing with deletion requests for data that also exists in backups, provided that the restore process also honors such flags.
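
      As a hedged illustration of that caveat (the data structures here are purely invented), a restore path can re-apply deletion requests recorded since the backup was taken, so data erased on request doesn't quietly come back:

          # Hypothetical sketch: honor erasure "tombstones" when restoring from backup.
          def restore_with_tombstones(backup: dict[str, dict], tombstones: set[str]) -> dict[str, dict]:
              # backup maps record id -> record; tombstones holds ids deleted on request.
              return {rid: rec for rid, rec in backup.items() if rid not in tombstones}

          live = restore_with_tombstones(
              backup={"u1": {"email": "a@example.com"}, "u2": {"email": "b@example.com"}},
              tombstones={"u2"},  # u2 asked to be erased after the backup was taken
          )
          assert "u2" not in live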

    • I highly doubt this was the reason. Google has similar deletion protection for other resources eg GCP projects are soft deleted for 30 days before being nuked.

    • Not really how it works. GDPR protects individuals and allows them to request deletion from the data owner, who then needs to respond to any request within 60(?) days. Google has nothing to do with that beyond having to make sure their infra is secure. There are even provisions for dealing with personal data in backups.

      EU law has nothing to do with this.

  • It's a joke that they're not doing these things. How can you be a giant cloud provider and not think of putting safeguards around data deletion? I guess that realistically they thought of it many times but never implemented it because it costs money.

    • It’s probably because implementing such safeguards wouldn't help anyone's promo packet.

      I really dislike that most of our major cloud infrastructure is provided by big tech rather than eg dedicated infrastructure vendors. I trust Equinix a lot more than Google because that's all they do.

      5 replies →

  • I’m completely baffled by Google’s “postmortem” myself. Not only is it obviously insufficient to anyone that has operated online services as you point out, but the conclusions are full of hubris. I.e. this was a one time incident, it won’t happen again, we’re very sorry, but we’re awesome and continue to be awesome. This doesn’t seem to help Google Cloud’s face-in-palm moment.

  • FWIW, you're solving the bug by fiat, and that doesn't work. Surely analogs to all those protections are already in place. But a firm and obvious requirement of a software system that is capable of deleting data is the ability to delete data. And if it can do that, you can write a bug that short-circuits any architectural protection you put in place. Which is the definition of a bug.

    Basically I don't see this as helpful. This is just a form of the "I would never have written this bug" postmortem response. And yeah, you would. We all would. And do.

  • Can you imagine if there had been no backup? Would Google have been on the hook to cover the +/- 200 billion in losses?

    This is why the smart people at Berkshire Hathaway don't offer Cyber Insurance: https://youtu.be/INztpkzUaDw?t=5418

    • I’d be very surprised if there wasn’t legalese in the contract/ToS about liability limitations etc. Would maybe expect it to be more than infrastructure costs for a big company custom contract, but probably not unlimited/as high as that, because it seems like such a blatant legal risk…

      Disclaimer: Am Googler who knows nothing real about this. This is rampant speculation on my part.

  • Could it have been a VMware expiration setting somewhere, and thus VMware itself deleted the customer’s tenant? If so then Google wouldn’t have a way to prove it won’t happen again except by always setting the expiration flag to “never” instead of leaving it blank

  • I would add one more -

    4. Add an option to auto-backup all the data from the account to an outside backup service of the user's choice.

    This would help not just with this kind of accident, but also with any kind of data corruption/availability issue.

    I would pay for this even for my personal gmail account.

  • I wouldn’t be surprised if VMware support is getting deprecated in GCP so they just don’t care - waiting for all customers to move off of it

    • My point is that if they had this problem in their VMware support, they might have a similar problem in one of their other services. But they didn't check (or at least they didn't claim credit for having checked, which likely means they didn't check).

If you're a GCP customer with a TAM, here's how to make them squirm. Ask them what protections GCP has in place, on your account, that would prevent GCP from inadvertently deleting large amounts of resources if GCP makes an administrative error.

They'll point to something that says this specific problem was alleviated (by deprecating the tool that did it and automating more of the process). Then you can persist: we know you've fixed this specific problem. Then follow up: will a human review a large-scale deletion before the resources are deleted?

From what I can tell (I worked for GCP aeons ago, and have been an active user of AWS for even longer), GCP's human-based protection measures are close to non-existent, and much weaker than AWS's. Either way, it's definitely worth asking your TAM about this very real risk.

  • Give 'em hell.

    This motivates the TAMs to learn to work the system better. They will never be able to change things on their own, but sometimes you get escalation-path promises and gentlemen's agreements.

    Enough screaming TAMs may eventually motivate someone high up to take action. Someday.

    • Way in which TAMs usually actually fix things:

         - Single customer complains loudly
         - TAM searches for other customers with similar concerns
         - Once total ARR is sufficient...
         - It gets added to dev's roadmap

      5 replies →

  • Pitch it as an opportunity for a human at Google to reach out and attempt to retain a customer when someone has their assets scheduled for deletion. Would probably get more traction internally, and has a secondary effect of ensuring it's clear to everyone that things are about to be nuked.

> ‘Google teams worked 24x7 over several days’

I don’t know if they get what the seven means there.

  • Ha, you're right it's a bit nonsensical if you take it completely literally.

    But of course x7 means working every day of the week. So you can absolutely work 24x7 from Thursday afternoon through Tuesday morning. It just means they didn't take the weekend off.

  • Perhaps the team members cycle so that the team was working on the thing without any night or weekend break. Which should be a standard thing at all times for a big project like this IMHO.

Wow - I was wrong. I thought this would have been something like terraform with a default to immediate delete with no recovery period or something. Still a default, but a third party thing and maybe someone in unisuper testing something and mis-scoping the delete.

Crazy that it really was google side. UniSuper must have been like WHAT THE HELL?

  • The article describes what happened and it had nothing to do with Unisuper. Google deployed the private cloud with an internal Google tool. And that internal Google tool configured things to auto-delete after a year.

  • One assumes they are getting a massive credit to their GCP bill, if not an outright remediation payment from Google.

    • The effusive praise for the customer in Google's statement makes me think they have free GCP for the next year, in exchange for not going public with their frustrations.

Sounds like a pretty thorough review in that they didn't stop at just an investigation of the specific tool / process, but also examined the rest for any auto deletion problems and also confirmed soft delete behavior.

They could have gone one step further by reviewing all cases of default behavior for anything that might be surprising. That said, it can be difficult to assess what is "surprising", as it's often the people who know the least about a tool/API who also utilize its defaults.

  • > and also confirmed soft delete behavior.

    Where exactly do they mention they have confirmed soft delete behavior systemically? All they said was they have ensured that this specific automatic deletion scenario can no longer happen, and it seems the main reason is because "these deployments are now automated". They were automated before, now they are even more automated. That does zero to assure me that their deletion mechanisms are consistently safe, only that there's no operator at the wheel any more.

  • Sounds more like some pants browning because incidents like this are a great reason to just use aws. Like come on:

    > After the end of the system-assigned 1 year period, the customer’s GCVE Private Cloud was deleted. No customer notification was sent because the deletion was triggered as a result of a parameter being left blank by Google operators using the internal tool, and not due a customer deletion request. Any customer-initiated deletion would have been preceded by a notification to the customer.

    ... Tada! We're so incompetent we let giant deletes happen with no human review. Thank god this customer didn't trust us and kept off-gcp backups or they'd be completely screwed.

    > There has not been an incident of this nature within Google Cloud prior to this instance. It is not a systemic issue.

    Translated to English: oh god, every aws and Azure salesperson has sent 3 emails to all their prospects citing our utter fuckup.

    • > Thank god this customer didn't trust us and kept off-gcp backups or they'd be completely screwed.

      Except that, from the article, the customer's backups that were used to recover were in GCP, and in the same region.

      5 replies →

I think it stretches credulity to say that the first time such an event happened was with a multi billion dollar mutual fund. In other words, I’m glad Unisuper’s problem was resolved, but there were probably many others which were small enough to ignore.

I can only hope this gives GCP the kick in the pants it needs.

  • GCVE (managed VMware) is a pretty obscure service, it's only used by the kind of multi billion dollar companies that want to lift and shift their legacy VMware fleets into the cloud as is.

  • A critical piece of the incident here was that it involved special customization that most customers didn't have or use, and which bypassed some safety checks; as a result, it couldn't impact "normal" small customers.

  • I doubt it, because even a smaller customer would have taken this to the press, which would have picked up on it.

    "Google deleted our cloud service" is a major news story for a business of any size.

> The customer’s CIO and technical teams deserve praise for the speed and precision with which they executed the 24x7 recovery, working closely with Google Cloud teams.

I wonder if they just get praise in a blog post, or if the customer is now sitting on a king's ransom in Google Cloud credit.

  • There's no reality where a competent customer isn't going to ensure Google pays for this. I'd be surprised if they have a bill at all this year.

UniSuper customer here in Aus. Didn’t know what it was but kept receiving emails every day while they were trying to resolve this. Only found out from the news what had actually happened. Feels like they downplayed the whole thing as “system downtime”. Imagine if something had actually happened to people’s money, the billions of dollars saved in their superannuation fund.

  • Did you get the same emails that other people did? An email almost every day with the words "disruption", "apologies", "frustration" used multiple times.

    A few days in an email titled: "A letter from the CEO"

    > I am writing to provide you with an update on the disruption to our services.

    > Firstly, let me begin by personally apologising for the outage, and thank you for your patience with our teams as they work around the clock to progressively get our systems back online.

    I'm really not sure that you could ask for clearer communication at the time, or a clearer description of what went wrong from inside Google Cloud.

The initial statement on this incident was pretty misleading, it sounded like Google just accidentally deleted an entire GCP account. Reading this writeup I'm reassured, it sounds like they only lost a region's worth of virtual machines, which is absolutely something that happens (and that I think my systems can handle without too much trouble.) The original writeup made it sound like all of their GCS buckets, SQL databases, etc. in all regions were just gone which is a different thing and something I hope Google can be trusted not to do.

  • It was a red flag when UniSuper said their subscription was deleted, not their account. Many people jumped to conclusions about that.

The idea that you could have an automated tool delete services at the end of a term for a corporate/enterprise customer of this size and scale is absolutely absurd and inexcusable. No matter whether the parameter was set correctly or incorrectly in the first place. It should go through several levels of account manager/representative/management for manual review by a human at the google side before removal.

> It is not a systemic issue.

I kinda think the opposite. The culture that kept these kinds of problems at bay has largely left the company or stopped trying to keep it alive, as they no longer really care about what they're building.

Morale is real bad.

Interesting, but I draw different lessons from the post.

Use of internal tools. Sure, everyone has internal tools, but if you are doing customer stuff, you really ought to be using the same API surface as the public tooling, which at cloud scale is guaranteed to have been exercised and tested much more than some little dev group's scripts. Was that the case here?

Passive voice. This post should have a name attached to it. Like, Thomas Kurian. Palming it off to the anonymous "customer support team" still shows a lack of understanding of how trust is maintained with customers.

The recovery seems to have been due to exceptional good fortune or foresight on the part of the customer, not Google. It seems that the customer had images or data stored outside of GCP. How many of us cloud users could say that? How many of us cloud users have encouraged customers to move further and deeper along the IaaS > PaaS > SaaS curve, making them more vulnerable to total account loss like this? There's an uncomfortable lesson here.

  • > name attached

    Blameless (and nameless) postmortems are a cultural thing at Google.

    • That's great internally, but serious external communication with customers should have a name attached and responsibility accepted (i.e., "the buck stops here").

      1 reply →

    • So, I read your comment and realized that I think it made me misinterpret the comment you are replying to? I thereby wrote a big paragraph explaining how even as someone who cares about personal accountability within large companies, I didn't think a name made sense to assign blame here for a variety of reasons...

      ...but, then I realized that that isn't what is being asked for here: the comment isn't talking about the nameless "Google operators" that aren't being blamed, it is talking about the lack of anyone who wrote this post itself! There, I think I do agree: someone should sign off on a post like this, whether it is a project lead or the CEO of the entire company... it shouldn't just be "Google Cloud Customer Support".

      Having articles that aren't really written by anyone frankly makes it difficult for my monkey brain to feel there are actual humans on the inside whom I can trust to care about what is going on; and, FWIW, this hasn't always been a general part of Google's culture: if this had been a screw up in the search engine a decade ago, we would have gotten a statement from Matt Cutts, and knowing that there was that specific human who cared on the inside meant a lot to some of us.

The quality and rigor of GCP’s engineering is not even remotely close to that of an AWS or Azure and this incident shows it.

  • Honestly I've never worked anywhere that didn't have some kind of "war story" that was told about how some admin or programmer mistake resulted in the deletion of some vast swathe of data, and then the panic-driven heroics that were needed to recover.

    It shouldn't happen, but it does, all the time, because humans aren't perfect, and neither are the things we create.

    • Sure, it's the tone and content of their response that is worrying, more than the fact that an incident happened. An appropriate response would be an honest and transparent root cause analysis with technically sound and thorough mitigations, including changes in policy with regard to defaults. Their response seems like only the most superficial, bare-minimum approximation of an appropriate response to deleting a large customer's entire account. If I were on the incident response team I'd be strongly advocating for at least these additional changes:

      Make deletes opt-in rather than opt-out. Make all large-scale deletions go through some review process with automated tests and a final human review. And not just by some low-level technical employee: the account managers should have seen this on their dashboard somewhere long before it happened. Finally, undertake a thorough and systematic review of other services to look for similar failure modes, especially with regard to anything which is potentially destructive and can conceivably be default-on in the absence of a supplied configuration parameter.
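
      For what it's worth, the review gate could be as simple as the following sketch (the threshold and function names are invented, not anything GCP actually has): anything above a certain blast radius refuses to proceed without a named human approver.

          REVIEW_THRESHOLD = 100  # invented: number of resources that triggers human review

          def hard_delete(resource_id: str) -> None:
              print(f"deleting {resource_id}")  # stands in for the real destructive call

          def execute_deletion(resource_ids: list[str], approved_by: str | None = None) -> None:
              # Large-scale deletions block until a human signs off.
              if len(resource_ids) >= REVIEW_THRESHOLD and approved_by is None:
                  raise PermissionError(
                      f"Deleting {len(resource_ids)} resources requires a named human approver"
                  )
              for rid in resource_ids:
                  hard_delete(rid)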

> “Google Cloud continues to have the most resilient and stable cloud infrastructure in the world.”

I don’t think GCP has that reputation compared to AWS or Azure. They aren’t at the same level.

> Google Cloud services have strong safeguards in place with a combination of soft delete, advance notification, and human-in-the-loop, as appropriate.

I mean, clearly not? By Google's own admission, in this very article, the resources were not soft deleted, no advance notification was sent, and there was no human in the loop for approving the automated deletion.

And Google's remediation items include adding even more automation for this process. This sounds totally backward to me. Am I missing something?

  • They automated away the part that had a human error (the internal tool with a field left blank), so that human error can't mess it up in the same way again. They should move that human labor to checking before tons of stuff gets deleted.

    • It seems to me that the default-delete is the real WTF. Why would a blank field result in a default auto-delete in any sane world. The delete should be opt-in not opt-out.
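
      Purely for illustration (none of this is Google's actual code), the difference is between a parser that silently turns a blank field into a deletion date and one that treats blank as "never delete", so deletion has to be opted into explicitly:

          from datetime import datetime, timedelta, timezone

          # Risky pattern, roughly what the postmortem describes: blank means "delete in a year".
          def parse_retention_risky(value: str | None) -> datetime:
              if not value:
                  return datetime.now(timezone.utc) + timedelta(days=365)  # silent time bomb
              return datetime.fromisoformat(value)

          # Safer pattern: deletion is opt-in; a blank field means "no scheduled deletion".
          def parse_retention_safe(value: str | None) -> datetime | None:
              if not value:
                  return None
              return datetime.fromisoformat(value)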

      1 reply →

Transparency for Google is releasing this incident report on the Friday of a long weekend [in the US].

I wonder if UniSuper was compensated for G’s fuckup.

“A single default parameter vs multibillion organization. The winner may surprise you!1”

Super motivating to have off-cloud backup strategies...

  • Or cross-cloud. S3's ingress and storage costs are low, so that's an option when you don't use AWS.
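
    As a rough sketch (the bucket and file names are made up, and it assumes AWS credentials are already configured), pushing a locally staged backup to S3 with boto3 is only a few lines:

        import boto3  # assumes AWS credentials are configured in the environment

        # Hypothetical example: copy a backup exported from GCP into an S3 bucket in another cloud.
        s3 = boto3.client("s3")
        s3.upload_file(
            Filename="/backups/db-2024-05-01.tar.gz",  # made-up local path
            Bucket="example-offsite-backups",          # made-up bucket name
            Key="prod/db-2024-05-01.tar.gz",
        )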

It sounds like a giant PR piece about how Google is ready to respond to a single customer and is ready to work through their problems instead of creating an auto-response account suspension infinite loop nightmare.

> Google Cloud continues to have the most resilient and stable cloud infrastructure in the world.

As a company, Google has a lot of work to do about its customer care reputation regardless of what some metrics somewhere say about whose cloud is more reliable or not. I would not trust my business to Google Cloud, and I would not trust anything with money to anything with the Google logo. Anyone who's been reading Hacker News for a couple of years can remember how many times folks were asking for insider contacts to recover their accounts/data. Extrapolating this to a business would keep me up at night.

Using "TL;DR" in professional communication is a little unprofessional.

Some non-nerd exec is going to wonder what the heck that means.

    It used to be called an executive summary. It's brilliant, but the kids found the phrase too formal.

    IMHO almost every article should start with one.

What surprises me the most is that the customer managed to actually speak to a person from Google support. Must have been a pretty big private cloud deployment.

Edit: saw from the other replies that the customer was Unisuper. No wonder they managed to speak to an actual person.

If only internal tools went through the same scrutiny as public tools.

More often than not, critical parameter mistakes or misconfigurations happen because of internal tools that work on unpublished params.

Internal tools should be treated as tech debt. You won't be able to eliminate issues, but you can vastly reduce the surface area for errors.