
Comment by Implicated

5 days ago

> but for entirely non-technical reasons we had to exclude it

Able/willing to expand on this at all? Just curious.

They seem to have gone all-in on AI for commits and ticket management. Not interested in interacting with that.

Otherwise, the built-in admin in a single executable was nice, as was the support for tiered storage, but single-node parallel write performance was pretty unimpressive and started throwing strange errors (investigating those led to the AI ticket discovery).

Not the same person you asked, but my guess would be that it's seen as a Chinese product.

  • RustFS appears to be very early-stage with no real distributed systems architecture: https://github.com/rustfs/rustfs/pull/884

    I'm not sure if it even has any sort of cluster consensus algorithm? I can't imagine it not eating committed writes in a multi-node deployment.

    Garage and Ceph (well, radosgw) are the only open-source S3-compatible object stores that have undergone serious durability/correctness testing. Anything else will most likely eat your data.

    • Hi there, RustFS team member here! Thanks for taking a look.

      To clarify our architecture: RustFS is purpose-built for high-performance object storage. We intentionally avoid relying on general-purpose consensus algorithms like Raft in the data path, as they introduce unnecessary latency for large blobs.

      Instead, we rely on erasure coding for durability and quorum-based strict consistency for correctness. A write is acknowledged only after the data has been safely persisted to a majority of drives. This means the concern about "eating committed writes" is addressed through strict read-after-write guarantees rather than a background consensus log.
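
      As a rough illustration of that quorum rule, here is a minimal Rust sketch. It is not RustFS's actual code; write_shard() is a hypothetical stand-in for persisting one erasure-coded shard to one drive:

          // Acknowledge a write only once a strict majority of drives
          // report the shard as durably persisted.
          fn write_shard(drive_id: usize, shard: &[u8]) -> bool {
              // Hypothetical: fsync the shard to the drive; true on success.
              !shard.is_empty() && drive_id < 16
          }

          fn quorum_write(shards: &[Vec<u8>]) -> Result<(), &'static str> {
              let quorum = shards.len() / 2 + 1; // strict majority
              let persisted = shards
                  .iter()
                  .enumerate()
                  .filter(|&(drive_id, shard)| write_shard(drive_id, shard))
                  .count();
              if persisted >= quorum {
                  Ok(()) // only now is the client acknowledgement sent
              } else {
                  Err("quorum not reached; write is not acknowledged")
              }
          }

          fn main() {
              let shards = vec![vec![0u8; 4]; 5]; // five drives, toy shards
              assert!(quorum_write(&shards).is_ok());
          }

      In a standard quorum scheme, read quorums are sized so they intersect every write quorum, which is one way to get read-after-write consistency without a consensus log.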

      While we avoid heavy consensus for data transfer, we utilize dsync—a custom, lightweight distributed locking mechanism—for coordination. This specific architectural strategy has been proven reliable in production environments at the EiB scale.
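
      And a similarly hedged sketch of the majority-grant idea behind a dsync-style lock; try_lock_node() and release_node() are hypothetical placeholders, not dsync's real API:

          // Hold a distributed lock only if a majority of nodes grant it;
          // on failure, release any partial grants before reporting back.
          fn try_lock_node(node: usize, resource: &str) -> bool {
              // Hypothetical RPC asking `node` to grant a lock on `resource`.
              !resource.is_empty() && node % 2 == 0
          }

          fn release_node(node: usize, resource: &str) {
              // Hypothetical RPC undoing a previously granted lock.
              let _ = (node, resource);
          }

          fn acquire_lock(nodes: usize, resource: &str) -> bool {
              let granted: Vec<usize> =
                  (0..nodes).filter(|&n| try_lock_node(n, resource)).collect();
              if granted.len() > nodes / 2 {
                  true // majority grant: coordination can proceed
              } else {
                  // No majority: undo partial acquisition before failing.
                  granted.iter().for_each(|&n| release_node(n, resource));
                  false
              }
          }

          fn main() {
              // With five nodes, three grants (a majority) are required.
              assert!(acquire_lock(5, "bucket/object-key"));
          }

      The point is that lock coordination stays off the data path: the bulk transfer itself only needs the quorum write sketched above, not a replicated log.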