Comment by testplzignore
3 hours ago
> They really need to figure out a way to correlate global configuration changes to the errors they trigger as fast as possible.
This is what jumped out at me as the biggest problem. A wild-west deployment process is a valid (if questionable) business decision, but if you go that route you need smart people in place to troubleshoot and make quick rollback decisions.
Their timeline:
> 08:47: Configuration change deployed and propagated to the network
> 08:48: Change fully propagated
> 08:50: Automated alerts
> 09:11: Configuration change reverted and propagation start
> 09:12: Revert fully propagated, all traffic restored
Two minutes for their automated alerts to fire is terrible. For a system that is expected to have no downtime, they should have been alerted to the spike in 500 errors within seconds, before the change had even fully propagated. Ideally the rollback would have been automated, but even if it is manual, the dude pressing the deploy button should have had real-time metrics on a second display with his finger hovering over the rollback button.
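To make that concrete, here is a minimal sketch of the watch-the-rollout loop I mean: poll the 5xx rate every few seconds while the change propagates and revert automatically if it spikes. Everything here is hypothetical (the metric feed, the revert hook, and the change ID are made up for illustration); it shows the pattern, not Cloudflare's actual tooling.

```python
import time

BASELINE_5XX_RATE = 0.002   # assumed normal fraction of requests returning 5xx
SPIKE_MULTIPLIER = 10       # alert when the rate jumps an order of magnitude
CHECK_INTERVAL_SECONDS = 5  # poll every few seconds, not every few minutes

def watch_rollout(change_id, get_5xx_rate, revert_config_change,
                  max_watch_seconds=600):
    """Poll the 5xx error rate while a change propagates; revert on a spike."""
    deadline = time.time() + max_watch_seconds
    while time.time() < deadline:
        rate = get_5xx_rate()
        if rate > BASELINE_5XX_RATE * SPIKE_MULTIPLIER:
            print(f"5xx rate {rate:.2%} exceeds threshold; reverting {change_id}")
            revert_config_change(change_id)
            return
        time.sleep(CHECK_INTERVAL_SECONDS)
    print(f"{change_id} looks healthy after {max_watch_seconds}s of watching")

# Example run with simulated inputs: the 5xx rate "spikes" ten seconds in.
if __name__ == "__main__":
    start = time.time()
    watch_rollout(
        "hypothetical-global-config-change",
        get_5xx_rate=lambda: 0.001 if time.time() - start < 10 else 0.8,
        revert_config_change=lambda cid: print(f"revert issued for {cid}"),
        max_watch_seconds=30,
    )
```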
OK, so they want to roll forward instead of rolling back immediately. Again, that's a valid approach, but you need to be prepared. At 08:48, they would have had tens of millions of "init.lua:314: attempt to index field 'execute'" messages being logged per second. Exact line of code. Not a complex issue. They should have had engineers reading that code and piecing this together by 08:49. The change you just deployed was to disable an "execute" rule. Put two and two together. Initiate rollback by 08:50.
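The correlation step itself is trivial once you have a deploy log and the time the new error signature appeared. A hypothetical sketch (the deploy-log entries are invented; the timestamps and error string come from the timeline above):

```python
# Correlate a new error signature with the most recent global change.
# Deploy-log entries and descriptions are hypothetical illustrations.
deploys = [
    ("07:10", "unrelated edge config tweak"),                       # hypothetical
    ("08:47", "global config change disabling an 'execute' rule"),  # hypothetical
]

error_signature = "init.lua:314: attempt to index field 'execute'"
error_onset = "08:48"  # when the new signature started flooding the logs

# Same-day HH:MM strings compare correctly as plain strings.
candidates = [d for d in deploys if d[0] <= error_onset]
culprit = max(candidates) if candidates else None

if culprit:
    print(f"'{error_signature}' first seen at {error_onset}; "
          f"most recent prior change: {culprit[1]} ({culprit[0]})")
```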
How disconnected are the teams that do deployments vs the teams that understand the code? How many minutes were they scratching their butts wondering "what is init.lua"? Are they deploying while their best engineers are sleeping?
I see lots of people complaining about this downtime, but is it really that big a deal to have 30 minutes of downtime or whatever? It's not like anything behind Cloudflare is "mission critical" in the sense that lives are at stake, or even a huge amount of money. In many developed countries the electric power service goes down locally on occasion, and that's more important than not being able to load a website. I agree that if CF offers a certain standard of reliability and doesn't meet it, they should give prorated refunds for the unexpected downtime, but otherwise I am not seeing what the big deal is here.
30 minutes of downtime is fine for most things, including Amazon.
30 minutes of unplanned downtime for infrastructure is unacceptable, but we're coming to accept it. AWS and Cloudflare have positioned themselves as The Internet, so they need to be held to a higher standard.
> complaining about this downtime, but is it really that big a deal to have 30 minutes of downtime or whatever? It's not like anything behind Cloudflare is "mission critical" in the sense that lives are at stake, or even a huge amount of money.
This reads like sarcasm, but I guess it is not. Yes, you are a CDN, and a major one at that. 30 minutes of downtime or "whatever" is not acceptable. I worked on traffic teams at social networks that considered themselves that mission critical. CF is absolutely that critical, and there definitely are lives at stake.