Comment by vlovich123

2 days ago

Someone else stated this implicitly, but with your reasoning no complex system is ever consistent with ongoing changes. From the perspective of one of many concurrent writers outside of the database there’s no consistency they observe. Within the database there could be pending writes in flight that haven’t been persisted yet.

That’s why these consistency models are defined from the perspective of “if you did no more writes after write X, what happens”.

They are consistent (the C in ACID) for a particular transaction ID / timestamp. You are operating on a consistent snapshot. You can also view consistent states across time if you are archiving the log.
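The snapshot idea above can be sketched in a few lines. This is a minimal, illustrative multi-version store (not any particular database's implementation): a read at timestamp T sees exactly the writes committed at or before T, regardless of later or in-flight writes.

```python
class VersionedStore:
    def __init__(self):
        # key -> list of (commit_ts, value), kept in commit order
        self.versions = {}
        self.clock = 0

    def commit(self, writes):
        """Atomically commit a batch of writes at the next timestamp."""
        self.clock += 1
        for key, value in writes.items():
            self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock  # the transaction's commit timestamp

    def read(self, key, snapshot_ts):
        """Return the latest value committed at or before snapshot_ts."""
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= snapshot_ts:
                return value
        return None

store = VersionedStore()
t1 = store.commit({"balance": 100})
t2 = store.commit({"balance": 80})
# A reader holding snapshot t1 still sees the t1 state, even though a
# later write has already committed:
assert store.read("balance", t1) == 100
assert store.read("balance", t2) == 80
```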

"... with your reasoning no complex system is ever consistent with ongoing changes. From the perspective of one of many concurrent writers outside of the database there’s no consistency they observe."

That was kind of my point. We should stop calling such systems consistent.

It is possible, however, to build a complex system, even with "event sourcing", that has consistency guarantees.

Of course, your comment contains the key phrase "outside of the database". You will need to either use a database or build a homegrown system with features similar to those databases provide.

One way is to pipe everything through a database that enforces consistency. I have actually built such an event sourcing platform.
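One common shape of "let the database enforce it" in event sourcing is a per-stream sequence number with a uniqueness constraint: concurrent writers racing on the same stream get a constraint violation instead of silently forking history. A hedged sketch using SQLite (table and column names are illustrative, not from the platform mentioned above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        stream_id TEXT NOT NULL,
        seq       INTEGER NOT NULL,
        payload   TEXT NOT NULL,
        PRIMARY KEY (stream_id, seq)   -- database-enforced ordering
    )
""")

def append(stream_id, expected_seq, payload):
    """Append an event only if no event exists yet at expected_seq."""
    try:
        with conn:  # one transaction; commits or rolls back atomically
            conn.execute(
                "INSERT INTO events VALUES (?, ?, ?)",
                (stream_id, expected_seq, payload),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # another writer already took this sequence number

assert append("account-1", 1, "Opened")
assert append("account-1", 2, "Deposited 100")
# A writer working from a stale version of the stream loses the race:
assert not append("account-1", 2, "Withdrew 50")
```

The losing writer then has to re-read the stream and decide whether its event still makes sense, which is exactly the consistency check an "outside the database" writer never gets.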

A second way is to have a reconciliation process that guarantees consistency at a certain point in time. For example, bank payment systems use reconciliation to achieve end-of-day consistency. Even those are not really "guaranteed" to be consistent; rather, inconsistencies are sufficiently improbable that they can be handled manually and within agreed-upon timeouts.
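The end-of-day reconciliation described above can be sketched as a diff between two ledgers, with mismatches escalated for manual handling rather than assumed away. This is a toy illustration; the transaction tuples are made up:

```python
from collections import Counter

def reconcile(our_ledger, their_statement):
    """Return (matched, ours_only, theirs_only) entry lists."""
    ours, theirs = Counter(our_ledger), Counter(their_statement)
    matched = list((ours & theirs).elements())
    ours_only = list((ours - theirs).elements())    # we have, they don't
    theirs_only = list((theirs - ours).elements())  # they have, we don't
    return matched, ours_only, theirs_only

ours = [("tx1", 100), ("tx2", -40), ("tx3", 25)]
theirs = [("tx1", 100), ("tx3", 25), ("tx4", 7)]
matched, ours_only, theirs_only = reconcile(ours, theirs)
# Anything unmatched goes to a manual-handling queue:
assert ours_only == [("tx2", -40)]
assert theirs_only == [("tx4", 7)]
```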