Show HN: Durable Streams – Kafka-style semantics for client streaming over HTTP

2 months ago (github.com)

Hey, I'm a co-founder at ElectricSQL. Durable Streams is the delivery protocol underneath our Postgres sync engine—we've been refining it in production for 18 months.

The core idea: streams get their own URL and use opaque, monotonic offsets. Clients persist the last offset they processed and resume with "give me everything after X." No server-side session state, CDN-cacheable, plain HTTP.
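The resume semantics can be sketched in memory like so. Everything here is an illustrative simplification, not code from the repo: the real protocol uses opaque offsets and plain HTTP requests, whereas this sketch uses numeric offsets and direct method calls to keep it self-contained.

```typescript
// In-memory sketch of offset-based resume (names are illustrative,
// not from the Durable Streams spec).

type Entry = { offset: number; data: string };

// Server side: an append-only log with monotonic offsets.
// (The real protocol treats offsets as opaque strings; a number
// stands in for one here.)
class Stream {
  private log: Entry[] = [];
  private next = 0;

  append(data: string): number {
    this.log.push({ offset: this.next, data });
    return this.next++;
  }

  // "Give me everything after X" -- the only read primitive a
  // resuming client needs. No per-client session state.
  readAfter(offset: number): Entry[] {
    return this.log.filter((e) => e.offset > offset);
  }
}

// Client side: persist the last processed offset, resume from there.
class Client {
  lastOffset = -1; // would be persisted to localStorage / disk
  received: string[] = [];

  sync(stream: Stream): void {
    for (const entry of stream.readAfter(this.lastOffset)) {
      this.received.push(entry.data);
      this.lastOffset = entry.offset; // commit progress after processing
    }
  }
}

const stream = new Stream();
const client = new Client();

stream.append("a");
stream.append("b");
client.sync(stream); // client catches up: received = ["a", "b"]

stream.append("c");
client.sync(stream); // "reconnect" resumes after offset 1, fetching only "c"

console.log(client.received.join(",")); // prints "a,b,c"
```

Because the read side is just "everything after X", responses for historical ranges are immutable and can sit in a CDN cache, which is what makes the plain-HTTP framing attractive.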

We kept seeing teams reinvent this for AI token streaming and real-time apps, so we're standardizing it as a standalone protocol.

The repo has a reference Node.js server and TypeScript client. Would love to see implementations in other languages—there's a conformance test suite to validate compatibility.

Happy to dig into the design tradeoffs—why plain HTTP over WebSockets, etc.

This seems like another great way to build local-first applications, which makes me think of CRDTs and raises a possibly silly question: what's the relationship between Durable Streams and CRDTs? Are they replacements for one another, or can they work well together?

  • They primarily serve different purposes, but they could complement each other.

    Durable Streams is a lightweight network protocol on top of standard HTTP. When you build a synchronisation layer for, say, a local-first app, it's not enough to exchange data over some lower-level protocol (HTTP / SSE / WS); you also have to define a higher-level protocol for how the client and server communicate, e.g. how to resume fetching after a reconnect based on the last data the client received (~offset). Since Durable Streams handles reconnects and offsets for you, you can build your domain logic directly on top of it.

    CRDTs are primarily meant to resolve data conflicts, usually client-side, according to a defined conflict-resolution strategy (e.g. last-writer-wins). Some CRDT libraries, such as Automerge, Loro, or Yjs, also implement a networking layer to exchange data between nodes (even P2P), so they already have built-in mechanisms for reconnection and offsets (~"send me data since X"). But nobody forces you to use their networking layer, and Durable Streams would give you a good starting point for building your own.

    • Great answer! I was always confused about how CRDTs get transferred. As you said, existing implementations often ship their own in-house networking. Now it's totally clear: since CRDTs are only about the data, it's no wonder their transfer methods differ. That makes Durable Streams a very good companion for CRDTs: the boundaries are clear, and they complement each other nicely.

      I also feel that I could hand the Durable Streams protocol spec to a coding agent and have it produce the implementation best suited to my current project (say, a Go repo). A simple yet well-specified protocol is more valuable than a bunch of SDKs.
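To make the division of labour above concrete, here is the conflict-resolution half in isolation: a toy last-writer-wins register, the simplest CRDT. This is my own sketch, not code from Automerge, Loro, or Yjs; the point is that merging says nothing about transport, so any delivery layer (Durable Streams included) could carry the updates.

```typescript
// Toy last-writer-wins (LWW) register. Two replicas can merge
// updates in any order and converge on the same value. No
// networking is implied -- transport is someone else's job.

type LWW<T> = { value: T; timestamp: number; replica: string };

// Merge is commutative, associative, and idempotent: the later
// timestamp wins; the replica id breaks ties deterministically.
function merge<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
  if (a.timestamp !== b.timestamp) {
    return a.timestamp > b.timestamp ? a : b;
  }
  return a.replica > b.replica ? a : b;
}

// Two replicas edit concurrently...
const fromAlice: LWW<string> = { value: "draft v1", timestamp: 5, replica: "alice" };
const fromBob: LWW<string> = { value: "draft v2", timestamp: 7, replica: "bob" };

// ...and converge regardless of delivery order.
const ab = merge(fromAlice, fromBob);
const ba = merge(fromBob, fromAlice);
console.log(ab.value, ab.value === ba.value); // prints "draft v2 true"
```

Because merge order doesn't matter, the delivery layer only has to guarantee that every update eventually arrives, which is exactly the at-least-once, resumable delivery that offset-based streaming provides.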