I think if you need something more reliable than us-east-1, you should be hosting on-prem in facilities you own and operate.
There aren't that many businesses that truly can't handle the worst case (so far) AWS outage. Payment processing is the strongest example I can come up with that is incompatible with the SLA that a typical cloud provider can offer. Visa going down globally for even a few minutes might be worse than a small town losing its power grid for an entire week.
It's a hell of a lot easier to just go down with everyone else, apologize on Twitter, and enjoy a forced snow day. Don't let it frustrate you. Stay focused on the business and customer experience. It's not ideal to be down, but there are usually much bigger problems to solve. Chasing an extra x% of uptime per year is usually not worth a multicloud/region clusterfuck. These tend to be even less resilient on average.
> worst case (so far)
It’s kind of amazing that after nearly 20 years of “cloud”, the worst case so far still hasn’t been all that bad. Outages are the mildest type of incident. A true cloud disaster would be something like a major S3 data loss event, or a compromise of the IAM control plane. That’s what it would take for people to take multi-region/multi-cloud seriously.
> A true cloud disaster would be something like a major S3 data loss event
So like the OVH data center fire back in 2021?
I mean, EBS went offline and people were OK continuing to use AWS…
https://arstechnica.com/information-technology/2011/04/amazo...
There are only two kinds of cloud regions: the ones people complain about and the ones nobody uses
I like this a lot; it's a great comparison for Hetzner's American offering, since it's not big enough for them to even bother investing much into it, so there aren't that many complaints about it. People just dumping it (me included) after discovering the amount of random issues it has probably also doesn't help.
If you are using Hetzner: avoid everything other than the fra region, and ideally pray that you are placed in the newer part of the datacenter, since it has the upgraded switching spine. I haven't seen the old one in a while, so they might have deprecated it entirely.
Hetzner does not have any "fra region". They have Helsinki, Falkenstein, and Nuremberg in Europe. None of them has any issues as far as I know. They used to have some issues with the very old stuff in Falkenstein.
Yeah, I was often the only one reporting Claude outages (or even completely missing support) in less commonly used Amazon Bedrock regions.
"A sound banker, alas, is not one who foresees danger and avoids it, but one who, when he is ruined, is ruined in a conventional and orthodox way along with his fellows, so that no one can really blame him." (J.M. Keynes)
That is incredibly appropriate.
eu-west-1 is miles better and is huge.
Ass-covering-wise, you are probably better off going down with everyone else on us-east-1. The not-so-fun alternative: being targeted during an RCA explaining why you chose some random zone no one ever heard of.
Places nobody's ever heard of like "Ohio" or "Oregon"?
Yeah, I'm not worried about being targeted in an RCA and pointedly asked why I chose a region with way better uptime than `us-tirefire-1`.
What _is_ worth considering is whether your more carefully considered region will perform better during an actual outage where some critical AWS resource goes down in Virginia, taking my region with it anyway.
IIRC, some AWS services are solely deployed in and/or entirely dependent on us-east-1. I don't recall which ones, but I very distinctly remember this coming up once.
I find it funny that we see complaints about software quality getting worse alongside people advocating choosing objectively risky AWS regions for career-risk and blame-minimisation reasons.
This was always the case. The OG saying was "no one ever got fired for buying IBM". Then it was changed to Microsoft. And so on.
They happen for the same reason. How do customers react in either case? If us-east-1 fails, nobody complains. If Microsoft uses a browser to render components on Windows and eats all of your RAM, nobody complains.
I seem to recall major resource unavailability in us-east-2 during one of the big us-east-1 outages because people were trying to fail over. Then a week later there was a us-east-2 outage that didn't make the news.
So if you tried to be "smart" and set up in Ohio, you got crushed by the thundering herd coming out of Virginia, and then bitten again because AWS barely cares about your region and neither does anyone else.
The truth is Amazon doesn't have any real backup for Virginia. They don't have the capacity anywhere else and the whole geographic distribution scheme is a chimera.
This is an interesting point. As recently as mid-2023, us-east-2 was 3 campuses with a design capacity of 5 buildings each. I know they've expanded by multiples since, but us-east-1 would still dwarf them.
Makes one wonder, does us-west-2 have the capacity to take on this surge?
> being targeted during an RCA explaining why you chose some random zone no one ever heard of.
“Duh, because there’s an AZ in us-east-1 where you can’t configure EBS volumes for attachment to Fargate launch-type ECS tasks, of course. Everybody knows that…”
:p
How about following the Well-Architected Framework and building something with a suitable level of 9s, where you can justify your decisions during a blameless postmortem? (Please stamp your buzzword bingo card for a prize.)
We vibe-code everything in flavor-of-the-month Node frameworks, tyvm, because Elixir is too hard to hire for (or some equally inane excuse).
This to me was the real lesson of the outage. A us-east-1 outage is treated like bad weather. An outage in a less-used region can be blamed on the dev. us-east-1 is too big to get blamed, which is why it should be the region of choice for an employee.
Bizarre way of making decisions.
us-east-2 is objectively a better region to pick if you want US East, yet you feel safer picking us-east-1 because “I’m safer making a worse decision that everyone understands is worse, as long as everyone else does it as well.”
Why aren't you using IBM Cloud?
Bandwidth cost is another major reason.
Cackling while reading this while visiting my family in Northern Virginia for the holidays. Despite it being a prominent place in the history of the web, it's still the least reliable AWS region (for now).
It's nice to know that where I grew up is Too Big to Fail lol.
At 34 hours of downtime, that's two nines of uptime.
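Rough check, assuming the 34 hours are counted against a full calendar year (24 × 365 = 8,760 hours):

    34 / 8,760 ≈ 0.39% downtime  ->  ≈ 99.61% uptime

so two nines, well short of three.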
At this point my garage is tied for reliability with us-east-1, largely because it got flooded 8 months ago.
I intentionally avoid using us-east-1 for anything, since I’ve seen so many outages.
us-east-1 is often a linchpin for services worldwide. Something hinky happening to DNS or DynamoDB in us-east-1 will probably wreck your day regardless of where you set up shop.
Answer these questions:
- Is X region and its services covered by a suitable SLA? https://aws.amazon.com/legal/service-level-agreements/
- Does X region have all the explicit services you need? (Note that things like certs and IAM are "global", so often implicitly us-east-1; see the sketch after this list for a quick programmatic check.)
- What are your PoP latency requirements?
- Do you have concerns about sovereign data: hosting, ingress, and egress? https://pages.awscloud.com/rs/112-TZM-766/images/AWS_Public_...
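For the service-coverage question, here's a minimal Python sketch, assuming boto3 is installed and leaning on its bundled endpoint data; the region and service names below are placeholders, swap in your own:

    # Minimal sketch: check whether a candidate region offers the services
    # you need, based on boto3's bundled endpoint data (which can lag a
    # little behind what AWS actually offers).
    import boto3

    candidate_region = "us-east-2"
    required_services = ["ec2", "dynamodb", "ecs", "lambda"]

    session = boto3.session.Session()
    missing = [
        svc for svc in required_services
        if candidate_region not in session.get_available_regions(svc)
    ]

    if missing:
        print(f"{candidate_region} is missing: {', '.join(missing)}")
    else:
        print(f"{candidate_region} offers everything on the list")

It only reflects the endpoint data shipped with your boto3 version, so treat a clean result as a starting point, not a guarantee.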
I stopped deploying to a single region for production years ago, so I don’t really have a horse in this region comparison race. That said, I’ve seen network level issues in every region I use — nothing like the big outage, but issues that may disrupt a service. Designing for how the world is rather than how I wish it was makes a lot of sense to me.
Yes, it's the least reliable. Thanks for summarizing the data here to illustrate the issue.
It's often seen as the "standard" or "default" region to use when spinning up new US-based AWS services, is the oldest AWS region, has the most interconnected systems, and likely has the highest average load.
It makes sense that us-east-1 has reliability problems, but I wish Amazon were a little more upfront about some of the risks when choosing that region.
Nobody ever got fired for connecting to us-east-1
The sorting for the "Duration" column appears to be lexicographical, not numeric.
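If the durations are plain strings (something like "2h 15m"; that format is only a guess on my part), the fix is to parse them into a number before sorting. A rough Python sketch:

    # Rough sketch of sorting duration strings numerically rather than
    # lexicographically. The "XhYm" format is an assumption about the
    # page's data, not something I've verified.
    import re

    def duration_minutes(s: str) -> int:
        hours = re.search(r"(\d+)\s*h", s)
        mins = re.search(r"(\d+)\s*m", s)
        return (int(hours.group(1)) if hours else 0) * 60 + \
               (int(mins.group(1)) if mins else 0)

    rows = ["2h 5m", "15m", "1h 30m", "10h 2m"]
    print(sorted(rows))                        # lexicographic: '10h 2m' sorts first
    print(sorted(rows, key=duration_minutes))  # numeric: '15m', '1h 30m', '2h 5m', '10h 2m'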
Of course it is; all of the NSA men-in-the-middle add a lot of overhead that can interfere with regular operations.
I think part of this is that status page updates require AWS engineers to post them. In the smaller Tokyo (ap-northeast-1) region, we've had several outages that didn't appear on the status page.
The test environment is deployed in us-east-1, whereas the production environment is deployed in us-west-2 on our side.
Glad to use us-west-2 for reasons.
I don't know if this is still true, or related, but that area used to be (circa 10-30 years ago) highly prone to power outages. The reason was lots of old trees near the lines that would inevitably fall; localized blackouts were common because of this.
That's an interesting data point, but I don't think it's relevant. The datacenters themselves are designed with a high level of power reliability and can island themselves if needed.
We've started to see some rather interesting consequences for grid reliability: https://blog.gridstatus.io/byte-blackouts-large-data-center-...
us-east-1 is far, far from the least reliable. It's one of the more reliable ones. Smaller regions tend to have more reliability issues affecting an entire AZ.
This analysis is skewed by the major incident in 2025. What was the data for 2024, and over the last, say, 5 years? The proclamation that us-east-1 is the least reliable is based on one year of data, and it's probably fair to say that at least the last 3 years, if not 5, would be a better predictor of reliability.
us-east-1 also hosts some special things, so it will have more services to lose.
We get constant resource issues in GCP’s us-east4 region
Yes
I searched for, and did not find, the word "backhoe."
Big fail.
I have said for years: never ascribe to terrorism what can be attributed to some backhoe operator in Ashburn, Virginia.
We got a lotta backhoes in northern Virginia.