
Comment by stmw

24 days ago

Every time someone builds one of these things and skips over "overcomplicated theory", aphyr destroys them. At this point, I wonder if we could train an AI to look over a project's documentation and predict whether it's likely to lose committed writes, just based on the marketing / technical claims. We probably can.

/me strokes my long grey beard and nods

People always think "theory is overrated" or "hacking is better than having a school education."

And then they proceed to shoot themselves in the foot with "workarounds" that break in well-known, well-documented, well-traversed problem spaces.

  • certainly a narrative that is popular among the greybeard crowd, yes. in pretty much every field i've worked in, the opposite problem has been much, much more common.

    • What fields? Cargo culting is annoying and definitely leads to suboptimal solutions and sometimes total misses, but I’ve rarely found that simply reading the literature on a thorny topic prevents you from thinking outside the box. Most people I’ve seen actually innovating (as in producing novel solutions and/or execution) understood the current SOTA of what they were working on inside and out.


  • I don't have a "school education", but I know plenty of theory; I have certainly read the papers cited in this test.

    • You might not have a school education, but you have educated yourself. It is unfortunately common to hear people complain that the theory one learns in school (or by determined self-study) is useless, which I think is what the greybeard comment you replied to intends to say.


The only post in this thread that actually summarized the core findings of the study, namely:

- ACKed messages can be silently lost due to corruption on a minority of nodes.

- A single-bit corruption can cause some replicas to lose up to 78% of stored messages.

- Snapshot corruption can propagate and lead to deletion of entire streams across the cluster.

- The default lazy-fsync mode can drop minutes of acknowledged writes on a crash.

- A crash combined with network delay can cause persistent split-brain and divergent logs.

- Data loss occurred even with “sync_interval = always” in the presence of membership changes or partitions (see the config sketch below).

- Self-healing and replica convergence did not always work reliably after corruption.

…was not downvoted, but flagged. That is telling. Documented failure modes are apparently controversial. It also raises the question: what level of technical due diligence did organizations like Mastercard, Volvo, PayPal, Baidu, Alibaba, or AT&T perform before adopting this system?
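
For context on that sync_interval finding: the knob lives in the server's jetstream config block. A hedged sketch of what it looks like (store_dir is an illustrative path; "always" is the strictest value, the one the report says still lost data in some scenarios):

    jetstream {
        store_dir: "/data/jetstream"  # illustrative path
        # The default is a periodic (lazy) sync timer. "always" fsyncs
        # each write before it is acknowledged -- and per the report,
        # even this did not prevent loss under membership changes or
        # partitions.
        sync_interval: always
    }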

So what is next? Nominate NATS for the Silent Failure Peace Prize?

  • > Nominate NATS for the Silent Failure Peace Prize?

    One or two of the comments on GitHub by the NATS team in response to Issues opened by Kyle are also more than a bit cringeworthy.

    Such as this one:

    "Most of our production setups, and in fact Synadia Cloud as well is that each replica is in a separate AZ. These have separate power, networking etc. So the possibility of a loss here is extremely low in terms of due to power outages."

    Which Kyle had to call them out on:

    "Ah, I have some bad news here--placing nodes in separate AZs does not mean that NATS' strategy of not syncing things to disk is safe. See #7567 for an example of a single node failure causing data loss (and split-brain!)."

    https://github.com/nats-io/nats-server/issues/7564#issuecomm...

  • > What level of technical due diligence was performed by organizations like Mastercard, Volvo, PayPal, Baidu, Alibaba, or AT&T before adopting this system?

    I have to note the following as a NATS fan:

      - I am horrified by Jepsen's reliability findings; however, they do vindicate certain design decisions I made in the past
    
      - 'Core NATS' is really mostly 'Redis pub/sub, but better', and it is honestly awesome, low-friction middleware. I've used it as part of eventing systems in the past and it works great.
    
      - FWIW, there's an MQTT bridge that requires JetStream, but if you're just doing QoS 0 you can work around the other warts.
    
      - If you use JetStream KV as a cache layer without real persistence (i.e. closer to how one uses Redis KV, where it's just memory-backed), you don't care about any of this. And again, JetStream KV IMO is better than Redis KV since they added TTL (see the sketch after the tl;dr).
    

    All of that is a way to say, I'd bet a lot of them are using Core NATS or other specific features versus something like JetStream.

    tl;dr - JetStream's reliability is apparently horrifying, but I stand by the statement that Core NATS and ephemeral KV are amazing.
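
    For the curious, a minimal sketch of that memory-backed cache pattern, assuming the nats.go client; the bucket name, TTL, and connection URL are illustrative:

      // Memory-backed JetStream KV bucket with a TTL, used strictly as a
      // cache. Nothing touches disk, so the fsync findings are moot.
      package main

      import (
          "log"
          "time"

          "github.com/nats-io/nats.go"
      )

      func main() {
          nc, err := nats.Connect(nats.DefaultURL)
          if err != nil {
              log.Fatal(err)
          }
          defer nc.Close()

          js, err := nc.JetStream()
          if err != nil {
              log.Fatal(err)
          }

          // TTL ages entries out automatically, Redis-cache style.
          kv, err := js.CreateKeyValue(&nats.KeyValueConfig{
              Bucket:  "cache",            // hypothetical bucket name
              Storage: nats.MemoryStorage, // ephemeral by design
              TTL:     5 * time.Minute,    // bucket-wide expiry
          })
          if err != nil {
              log.Fatal(err)
          }

          if _, err := kv.Put("session.42", []byte("some-value")); err != nil {
              log.Fatal(err)
          }
          entry, err := kv.Get("session.42")
          if err != nil {
              log.Fatal(err)
          }
          log.Printf("%s = %s", entry.Key(), entry.Value())
      }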

You can have DeepWiki literally scan the source code and tell you:

> 2. Delayed Sync Mode (Default)

> In the default mode, writes are batched and marked with needSync = true for later synchronization filestore.go:7093-7097. The actual sync happens during the next syncBlocks() execution.
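
To make concrete why that pattern is risky, here is a toy sketch in Go (illustrative only, not NATS's actual code) of write-now, ack-now, sync-later:

    // Toy illustration of the delayed-sync pattern described above.
    package main

    import "os"

    // appendAndAck writes and lets the caller ACK immediately; the fsync
    // is left to a periodic timer (the "needSync" path). A crash or power
    // loss before that timer fires silently drops the acknowledged write.
    func appendAndAck(f *os.File, msg []byte) error {
        if _, err := f.Write(msg); err != nil {
            return err
        }
        // No f.Sync() here: the data may still live only in the OS page
        // cache. The caller sends the ACK now, so durability is a
        // promise, not a fact.
        return nil
    }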

However, if you read DeepWiki's conclusion, it is far more optimistic than what Aphyr uncovered in real-world testing.

> Durability Guarantees

> Even with delayed fsyncs, NATS provides protection against data loss through:

> 1. Write-Ahead Logging: Messages are written to log files before being acknowledged

> 2. Periodic Sync: The sync timer ensures data is eventually flushed to disk

> 3. State Snapshots: Full state is periodically written to index.db files filestore.go:9834-9850

> 4. Error Handling: If sync operations fail, NATS attempts to rebuild state from existing data filestore.go:7066-7072

https://deepwiki.com/search/will-nats-lose-uncommitted-wri_b...

It's not even "overcomplicated theory"; it's just "commit your writes before you say you committed your writes." It's actually way, way more complicated to build a system that is correct without doing that.
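
In code, the boring-but-correct version is one extra line; a minimal sketch in Go (hypothetical helper, not any particular library's API):

    package main

    import "os"

    // appendDurably makes the write durable *before* the caller may ACK it.
    func appendDurably(f *os.File, msg []byte) error {
        if _, err := f.Write(msg); err != nil {
            return err
        }
        if err := f.Sync(); err != nil { // fsync before the ack, never after
            return err
        }
        return nil // only now is "committed" a true statement
    }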

You don’t even have to train an AI. At this point, absent evidence to the contrary, we should default to “it loses committed writes”.