Comment by wowamit
2 days ago
Eventual consistency arises from necessity -- a need to prioritise availability and partition tolerance (AP) over consistency. Not every application needs strong consistency as a primary constraint. Why would you optimise for it, at the cost of availability, when eventual consistency is an acceptable default?
Practically, the difference in availability for a typical internet-connected application is very small. Partitions do happen, but in most cases it's possible to route user traffic around them, given the paths that traffic tends to take into large-scale data center clusters (redundant, and typically not the same paths as the cross-DC traffic). The remaining cases do exist, but they are exceedingly rare in practice.
Note that I’m not saying that partitions don’t happen. They do! But in typical internet-connected applications, the cases where a significant proportion of clients ends up on the same side of a partition as a minority of the database (i.e. the cases where AP actually improves availability) are very rare in practice.
For client devices and IoT, partitions off from the main internet are rare, and there, local copies of data are a necessity anyway.
Because the incidence and cost of mistaken under-consistency are both generally higher than those of mistaken over-consistency, especially at the scale where people need to rely on managed off-the-shelf services like Aurora instead of being able to build their own.
I would be hesitant to generalise that. There is an inherent tension with its impact on the larger availability of your system. We can't analyse the effect in isolation.
Most systems can tolerate downtime but not data incorrectness. Also, "eventual consistency" is a bit of a misnomer, because it implies that the only cost you’re paying is staleness. In reality these systems are "never consistent": you often give up guarantees like full serializability, making you susceptible to outright data corruption.
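To make that concrete, here's a minimal sketch (not tied to any particular database) of a hypothetical two-replica store that resolves conflicts with last-write-wins timestamps. Both replicas converge to the same value, so the system is "eventually consistent", yet one of two concurrent increments is silently lost -- corruption, not just staleness:

```python
# Hypothetical two-replica key-value store using last-write-wins (LWW).
# Each entry is (value, timestamp); on merge, the higher timestamp wins.
replica_a = {"counter": (0, 0)}
replica_b = {"counter": (0, 0)}

def write(replica, key, value, ts):
    replica[key] = (value, ts)

def merge(r1, r2):
    # LWW merge: for every key, keep the entry with the larger timestamp.
    merged = {}
    for key in r1.keys() | r2.keys():
        merged[key] = max(r1.get(key, (None, -1)),
                          r2.get(key, (None, -1)),
                          key=lambda entry: entry[1])
    return merged

# Two clients each read counter=0 and add 1, writing to different replicas.
write(replica_a, "counter", 0 + 1, ts=1)
write(replica_b, "counter", 0 + 1, ts=2)

# After anti-entropy, both replicas agree on the merged state...
converged = merge(replica_a, replica_b)
# ...but the result is 1, not 2: one increment was dropped without any error.
print(converged["counter"][0])
```

Under serializability the two increments would have been ordered and the counter would read 2; here the replicas "agree" on a wrong answer, which is exactly the kind of failure that's far harder to detect than an outage.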
It might arise from necessity, but what I see in practice is that even senior developers deprioritize consistency on platforms and backends, apparently just because scalability and performance are so fashionable.
That pushes the hard problem of maintaining a consistent experience for the end users to the frontend. Frontend developers are often less experienced.
So in practice you end up with flaky applications, and frontend and backend developers blaming each other.
Most systems do not need "webscale". I would challenge the idea that "eventual consistency" is an acceptable default.