Comment by victormy

10 days ago

First off, I don't think there is anything wrong with MinIO closing down its open-source offering. There are simply too many people globally who use open source without being willing to pay for it. I started testing various alternatives a few months ago, and I still believe RustFS will emerge as the winner after MinIO's exit. I evaluated Garage, SeaweedFS, Ceph, and RustFS. Here are my conclusions:

1. RustFS and SeaweedFS are the fastest in the object storage field.

2. The installation for Garage and SeaweedFS is more complex compared to RustFS.

3. The RustFS console is the most convenient and user-friendly.

4. Ceph is too difficult to use; I wouldn't dare deploy it without a deep understanding of the source code.

Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.

Furthermore, Milvus gave RustFS a very high official evaluation. Based on technical benchmarks and other aspects, I believe RustFS will ultimately win.

https://milvus.io/blog/evaluating-rustfs-as-a-viable-s3-comp...

  Maintainer of Milvus here. A few thoughts from someone who lives this every day:

  1. The free user problem is real, and AI makes it worse. We serve a massive community of free Milvus users — and we're grateful for them; they make the project what it is. But we also feel the tension MinIO is describing. You invest serious engineering effort into stability and bug fixes, and most users will never become paying customers. In the AI era this ratio only gets harder — copying a project with AI's help is easier than ever.

  2. We need better object storage options. As a heavy consumer of object storage, Milvus needs a reliable, performant, and truly open foundation. RustFS is a solid candidate — we've been evaluating it seriously. But we'd love to see more good options emerge. If the ecosystem can't meet our needs long-term, we may have to invest in building our own.

  3. Open source licensing deserves a serious conversation. The Apache 2.0 / Hadoop-era model served us well, but cracks are showing. Cloud vendors and AI companies consume enormous amounts of open-source infrastructure, and the incentives to contribute back are weaker than ever. I don't think the answer is closing the source — but I also don't think "hope enterprises pay for support" scales forever. We need the community to have an honest conversation about what sustainable open source looks like in the AI era. MinIO's move is a symptom worth paying attention to.

  • GPL for open source and commercial license for the enterprise lawyers.

    Unfortunately, a majority seems to hate GPL these days even though it prevents most of the worst corporate behaviors.

  • Elvin here from RustFS. Appreciate the feedback, especially coming from the Milvus team—we’ve followed your work for a long time.

    You’re right about the "tension" in OSS. That’s exactly why we are pledging to keep the RustFS core engine permanently open source. We want to provide the solid, open foundation you mentioned so that teams like yours don't feel forced to build and maintain a storage layer from scratch.

    On the sustainability question—you've described the challenge better than most. We're still figuring out the right model, and I don't think anyone has a perfect answer yet. What we do know is that we're building something technically excellent first, and we're committed to doing it in a way that keeps the core open.

  • Huge thanks for your contributions to the open-source world! Milvus is an incredibly cool product and a staple in my daily stack.

    It’s been amazing to watch Milvus grow from its roots in China to gaining global trust and major VC backing. You've really nailed the commercialization, open-source governance, and international credibility aspects.

    Regarding RustFS, I think that—much like Milvus in the early days—it just needs time to earn global trust. With storage and databases, trust is built over years; users are naturally hesitant to do large-scale replacements without that long track record.

    Haha, maybe Milvus should just acquire RustFS? That would certainly make us feel a lot safer using it!

Garage installation is easy.

1. Download or build the single binary and install it on your system (e.g. at `/usr/local/sbin/garage`)

2. Create a file `/etc/garage.toml`:

  metadata_dir = "/data/garage/meta"
  data_dir = "/data/garage/data"
  db_engine = "sqlite"
  
  replication_factor = 1
  
  rpc_bind_addr = "[::]:3901"
  rpc_public_addr = "127.0.0.1:3901"
  rpc_secret = "[your rpc secret]"
  
  [s3_api]
  s3_region = "garage"
  api_bind_addr = "[::]:3900"
  root_domain = ".s3.garage.localhost"
  
  [s3_web]
  bind_addr = "[::]:3902"
  root_domain = ".web.garage.localhost"
  index = "index.html"
  
  [k2v_api]
  api_bind_addr = "[::]:3904"
  
  [admin]
  api_bind_addr = "[::]:3903"
  admin_token = "woG4Czw6957vNTXNfLABdCzI13NTP94M+qWENXUBThw="
  metrics_token = "3dRhgCRQQSxfplmYD+g1UTEZWT9qJBIsI56jDFy0VQU="

3. Start it with `garage server`, or just have an AI write an init script or unit file for you. (You can `pkill -f /usr/local/sbin/garage` to shut it down.)
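For step 3, a minimal systemd unit might look like the sketch below. This is my own example, not from the Garage docs; the binary path matches the install location above, and `garage server` reads `/etc/garage.toml` by default:

  [Unit]
  Description=Garage object storage daemon
  After=network-online.target
  Wants=network-online.target
  
  [Service]
  ExecStart=/usr/local/sbin/garage server
  Restart=on-failure
  
  [Install]
  WantedBy=multi-user.target

Drop it in as `/etc/systemd/system/garage.service`, then `systemctl enable --now garage` instead of the pkill dance.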

Also, NVIDIA has a phenomenal S3-compatible system that nobody seems to know about, named AIStore: https://aistore.nvidia.com/ It's a bit more complex, but very powerful and fast (faster than MinIO, though slightly less space-efficient, because it keeps a complete copy of each object on a single node so the object doesn't have to be reconstituted the way it would be in MinIO). It can also act as a proxy in front of other S3 systems, including AWS S3 or GCS, and present a single unified namespace to your clients.

IMO, SeaweedFS is still too much of a personal project. It's fast for small files, but keep good, frequent backups in a separate system if you choose it.

I personally will avoid RustFS. Even if it were totally amazing, the Contributor License Agreement makes me feel like we're getting into the whole MinIO rug-pull situation all over again, and you know what they say about doing the same thing and expecting a different result.

  • Garage is indeed an excellent project, but I think it has a few drawbacks compared to the alternatives:

    Metadata backend: it relies on SQLite. I have concerns about how well this scales or handles high concurrency with massive datasets.

    Admin UI: the console is still not very user-friendly or polished.

    Deployment complexity: you are required to configure a "layout" (regions/zones) to get started, whereas MinIO doesn't force this concept on you for simple setups.

    Design philosophy: while Garage is fantastic for edge/geo-distributed use cases, I feel its overall design still lags behind MinIO and RustFS. There is a higher barrier to entry because you have to learn Garage-specific concepts just to get it running.

  • Regarding AIStore, the recommended production configuration is Kubernetes, which brings in a huge amount of complexity. Also, one person (Alex Aizman) has about half of the total commits in the project, so the bus factor seems to be 1.

    I could see running AIStore in single-binary mode for small deployments, but for anything large and production-grade I would not touch it. Ceph is going to be the better option IMO; it is a truly collaborative open-source project developed by multiple companies with a long track record.

> RustFS and SeaweedFS are the fastest in the object storage field.

I'm not sure SeaweedFS is comparable. It's based on Facebook's Haystack design, which addresses a very specific use case: minimizing I/Os, in particular metadata lookups, when accessing individual objects. This leads to many trade-offs. For instance, its main unit of operation is the volume: data is appended to a volume, erasure coding is done per volume, updates happen at the volume level, and so on.
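To make the trade-off concrete, here's a toy sketch of the Haystack idea (my own illustration, not SeaweedFS code): blobs are appended to one big volume file, and a read needs only an in-memory offset lookup plus a single seek — at the cost of volume-granular deletes, compaction, and erasure coding.

  import io
  
  class ToyVolume:
      """Toy append-only volume in the Haystack spirit: one backing file,
      an in-memory needle index, and reads that cost a single seek."""
  
      def __init__(self):
          self.store = io.BytesIO()   # stand-in for the on-disk volume file
          self.index = {}             # key -> (offset, length)
  
      def put(self, key, blob):
          offset = self.store.seek(0, io.SEEK_END)  # writes always append
          self.store.write(blob)
          self.index[key] = (offset, len(blob))
  
      def get(self, key):
          offset, length = self.index[key]          # no on-disk metadata lookup
          self.store.seek(offset)
          return self.store.read(length)
  
      def delete(self, key):
          # A delete only drops the index entry; the space is reclaimed
          # later by rewriting (compacting) the entire volume.
          del self.index[key]
  
  vol = ToyVolume()
  vol.put("a.jpg", b"hello")
  vol.put("b.jpg", b"world")
  print(vol.get("a.jpg"))  # b'hello'

Great for serving billions of small photos; awkward for a general-purpose S3 workload where objects are overwritten, scanned, and lifecycle-managed individually.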

On the other hand, a general object store goes beyond needle-in-a-haystack type of operations. In particular, people use an object store as the backend for analytics, which requires high-throughput scans.

> 4. Ceph [...]

MinIO was more for the "mini" use case (or rather "anything not large scale," with a very broad definition of large scale). Here "works out of the box" is paramount.

And Ceph is more for the maxi use case. There, in-depth fine-tuning, highly complex setups, and distributed deployments are the norm, so the out-of-the-box small-scale experience is barely relevant.

So they really don't fill the same space, even though their functionality overlaps.

I want to like RustFS, but there's so much marketing attached to the software that it turns me off a little — right down to the little rocket emoji and benchmark on the GitHub about page. Sometimes less is more. Look at the ty GitHub home page: one benchmark on the main page, and the description is just "An extremely fast Python type checker and language server, written in Rust."

  • Haha, +1. I really like RustFS as a product, but the marketing fluff and documentation put me off too. It reads like non-native speakers relying heavily on AI, which explains a lot. Honestly, they really need to bring in some native English speakers to overhaul the docs. The current vibe just doesn't land well with a US audience.

> too many people globally who use open source without being willing to pay for it.

That's an odd take... open source is a software licensing model, not a business model.

Unless you have some knowledge that I don't, MinIO never asked for nor accepted donations from users of their open source offerings. All of their funding came from sales and support of their enterprise products, not their open source one. They are shutting down their own contributions to the open source code in order to focus on their closed enterprise products, not due to lack of community engagement or (as already mentioned) community funding.

  • > That's an odd take... open source is a software licensing model, not a business model.

    Yes, open-source is a software license model, not a business model. It is also not a software support model.

    This change is them essentially declaring that MinIO is EOL and will not have any further updates.

    For comparison, Windows 10, paid software released in 2015 (the same year as the first MinIO release), is already EOL.

  • I respectfully disagree with the notion that open source is strictly a licensing model and not a business model. For an open-source project to achieve long-term reliability and growth, it must be backed by a sustainable commercial engine. History has shown that simply donating a project to a foundation (like Apache or CNCF) isn't a silver bullet; many projects under those umbrellas still struggle to find the resources they need to thrive.

    The ideal path, and the best outcome for users globally, is a "middle way" where:

    1. The software remains open and maintained.

    2. The core team has a viable way to survive and fund development.

    3. Open code ensures security, transparency, and a trustworthy software supply chain.

    However, the way MinIO has handled this transition is, in my view, the most disappointing approach possible. It creates a significant trust gap. When a company pivots this way, users are left wondering about the integrity of the code, whether it's the potential for "backdoors" or undisclosed data transmission. I hope to see other open-source object storage projects mature quickly to provide a truly transparent and reliable alternative.

    • > For an open-source project to achieve long-term reliability and growth, it must be backed by a sustainable commercial engine

      You mean like Linux, Python, PostgreSQL, Apache HTTP Server, Node.js, MariaDB, GNU Bash, GNU Coreutils, SQLite, VLC, LibreOffice, OpenSSH?

      4 replies →

> Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.

What legal risks does it help mitigate?

  • RustFS has rug-pull written all over it. You can bookmark this comment for the future. 100% guaranteed it will happen. Only question is when.

    • I’m Elvin from the RustFS team in the U.S. Thanks for pointing out the issues with our initial CLA. We realized the original wording was overreaching and created a lot of distrust about the project's future.

      We’ve officially updated the CLA to a standard License Grant model. Under these new terms, you retain full ownership of your contributions, and only grant us a non-exclusive license to use them. You can check the updated CLA here: https://github.com/rustfs/rustfs/blob/main/CLA.md.

      More importantly, the RustFS team is officially pledging to keep our core repository permanently open-source. We are committed to an open-core engine for the long term, not a "bait and switch."

      1 reply →

    • Lol, maybe you should fund the RustFS team yourself or sponsor a top-tier legal team for them. If you can help them rewrite their CLAs and guarantee they'll never face any IP risks down the road, then sure, you're 100% right.

      4 replies →

I run Ceph in my k8s cluster (using rook) -- 4 nodes, 2x 4TB enterprise SSDs on each node. It's been pretty bulletproof; took some time to set up and familiarize with Ceph but now it's simple to operate.
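For anyone curious what "using rook" means in practice: the whole cluster is declared as a single CephCluster custom resource. A minimal sketch (image tag and field values here are my assumptions; check the rook docs for current ones):

  apiVersion: ceph.rook.io/v1
  kind: CephCluster
  metadata:
    name: rook-ceph
    namespace: rook-ceph
  spec:
    cephVersion:
      image: quay.io/ceph/ceph:v18
    dataDirHostPath: /var/lib/rook
    mon:
      count: 3        # spread monitors across nodes for quorum
    storage:
      useAllNodes: true
      useAllDevices: true   # rook claims every empty disk it finds

Apply it after installing the rook operator, and rook provisions the mons and one OSD per SSD on its own.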

Claude Code is amazing at managing Ceph, restoring, fixing CRUSH maps, etc. It's got all the Ceph motions down to a tee.

With the tools at our disposal nowadays, saying "I wouldn't dare deploy it without a deep understanding of the source code" seems like an exaggeration!

I encourage folks to try out Ceph if it supports their usecase.

  • Considering the hallucinations I routinely deal with about databases, there isn’t a chance in hell I would trust an LLM to manage my storage for me.

  • If you set up Ceph correctly (multiple failure domains, correct replication rules across failure domains, monitors spread across failure domains, OSDs never force-purged), it is actually pretty hard to break. Rook helps a lot too, as it makes it easier to set up Ceph correctly.

It looks like this article is biased. It only benchmarked RustFS.

In my experience, SeaweedFS has at least 3–5× better performance than MinIO. I used MinIO to host 100 TB of images to serve millions of users daily.

Gosh, Ceph, what a PITA. Never again, LOL. I wouldn't even want an LLM to suffer working on it.

  • Haha, totally get you! I think if you forced an LLM to manage a large-scale Ceph cluster, it would probably start hallucinating about retirement.