Comment by GuB-42
5 days ago
Same idea with the CrowdStrike bug: it seems like it didn't have much of an effect on their customers, certainly not on my company at least, and the stock quickly recovered, in fact it's doing very well. To me, it looks like nothing changed, no lessons learned.
What do you mean, no lessons learned? Seems like you haven't been paying attention... there's always a lesson learned.
I believe they mean that CrowdStrike learned that they could screw up at this level and still keep their customers...
That's true of a lot of "Enterprise" software. Microsoft enjoys success despite abusing their enterprise customers seemingly daily at this point.
For bigger firms, the reality is that it would probably cost more to switch EDR vendors than the outage itself cost them, and up to that point, CrowdStrike was the industry standard and enjoyed a really good track record and reputation.
Depending on the business, there are long-term contracts and early termination fees, there's the need to run the new solution alongside the old one during migration, and there are probably years of telemetry and incident data you need to keep on the old platform, so even if you switch, you're still paying CrowdStrike for the retention period. It was one (major) issue over 10+ years.
Just like with CloudFlare, the switching costs are higher than the cost of the outage, unless there were major outages of that scale multiple times per year.
That IS the lesson! There are a million questions I can ask myself about those incidents. What dictates that they can't ever screw up? Sure, it was a big screw-up, but understanding the tolerances for screw-ups is important to understanding how fast and loose you can play it. AWS has at least one big outage a year; what's the breaking point? Risk and reward, etc.
I've worked places where every little thing is yak-shaved, and places where no one is even sure if the servers are up during working hours. Both jobs paid well; both jobs had enough happy customers.