Comment by pikdum

6 hours ago

Wasn't aware that there was noticeably higher latency between availability zones in the same AWS region. Kinda thought the whole point was to run replicas of your application in multiple zones to achieve higher availability.

They also charge you like 1c/GB for traffic egress between the zones. To top it off, there are issues with AWS load balancers in multi-zone setups. Ultimately I've come to the conclusion that large multi-zonal clusters are a mistake. Run several disposable single-zone clusters if you want zone redundancy.

  • At $WORK, traffic between zones ($REGION-DataTransfer-Regional-Bytes) is the second largest cost on our AWS bill, more than our EC2/EKS cost. It adds up to mid six figures each year. We try to minimize this where it's easy to do so. For example, our EKS pods perform reads against RDS read replicas in the same AZ only, but you're out of luck for writes to the primary instance. Reducing this in any significant way can eat up a lot of time, and for us the cost is enough to be painful but not enough to dedicate an engineer to fixing.
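    The same-AZ read trick can be sketched roughly like this. Everything here is an assumption for illustration: the endpoint hostnames are hypothetical, and how you learn the pod's AZ varies (on EKS it's commonly surfaced via the `topology.kubernetes.io/zone` node label injected through the downward API).

    ```python
    import random

    def pick_read_endpoint(pod_az, replicas):
        """Prefer a read replica in the pod's own AZ to avoid cross-AZ
        data transfer charges; fall back to any replica if none is local.

        replicas: dict mapping endpoint hostname -> availability zone.
        """
        local = [ep for ep, az in replicas.items() if az == pod_az]
        return random.choice(local or list(replicas))

    # Hypothetical replica fleet for illustration
    replicas = {
        "replica-a.example.internal": "us-east-1a",
        "replica-b.example.internal": "us-east-1b",
    }

    # A pod in us-east-1a gets routed to the replica in its own AZ
    print(pick_read_endpoint("us-east-1a", replicas))
    ```

    Writes still have to go to the single primary, wherever it lives, which is why only the read path can be kept zone-local this way.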

    This is precisely how Amazon's bread is buttered. An outage affecting an entire AZ is rare enough that I would feel pretty happy making all our clusters single-AZ, but it would be a fool's errand for me to convince management to go against Amazon's official recommendations.

I was surprised too. Of course it makes sense when you look at it hard enough: two separate DCs won't have the same latency as intra-DC communication. They might have the same physical wire speed, but physical distance matters.