Go for Garage. You can check the docker-compose file and the "setup" crate of this project: https://github.com/beep-industries/content. There are a few tricks to make it work locally so that it generates an API key and bucket declaratively, but in the end it does the job.
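For reference, running Garage under docker-compose looks roughly like the sketch below. The image tag, ports, and paths are illustrative (3900 is Garage's default S3 API port, 3901 its RPC port); you also need a `garage.toml` next to it, so check the Garage docs for your version:

```yaml
# Minimal sketch, assuming a garage.toml prepared alongside this file.
services:
  garage:
    image: dxflrs/garage:v1.0.0   # pick the tag matching your setup
    ports:
      - "3900:3900"   # S3 API
      - "3901:3901"   # RPC
      - "3903:3903"   # admin API
    volumes:
      - ./garage.toml:/etc/garage.toml
      - ./meta:/var/lib/garage/meta
      - ./data:/var/lib/garage/data
```

The declarative key/bucket setup the comment mentions is done against the running node (layout, bucket, and key commands), which is where the "few tricks" come in.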
I didn't find an alternative that I liked as much as MinIO and, unfortunately, ended up creating my own. It includes just the most basic features and cannot be compared to the larger projects, but it is simple and efficient.
https://github.com/espebra/stupid-simple-s3
The listing is perhaps in line with the first two "s"es. It seems it always iterates through all files, reads the "meta.json", and then filters?
Yes, indeed. The list operation is expensive. The S3 spec says that the list output needs to be sorted.
1. All filenames are read.
2. All filenames are sorted.
3. Pagination is applied.
Obviously it doesn't scale, but it works OK-ish for smaller data sets. It is difficult to do this efficiently without introducing complexity. My applications don't use listing, so I prioritised simplicity over performance for the list operation.
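The three steps above can be sketched as a naive list-objects handler; this is a minimal illustration (the function name, marker-based pagination, and `max_keys` parameter are assumptions for the sketch, not the project's actual API):

```python
import os

def list_objects(bucket_dir, prefix="", marker="", max_keys=1000):
    # 1. Read all filenames (object keys) in the bucket directory.
    keys = os.listdir(bucket_dir)
    # 2. Sort them, since the S3 spec requires lexicographically
    #    sorted list output.
    keys.sort()
    # 3. Apply the prefix filter and marker-based pagination.
    keys = [k for k in keys if k.startswith(prefix) and k > marker]
    page = keys[:max_keys]
    is_truncated = len(keys) > max_keys
    return page, is_truncated
```

Every call touches every key, which is exactly why it is O(n) per request and fine only for smaller data sets.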
I work on SeaweedFS.
Just download the single binary (available for most platforms) and run `weed mini -dir=your_data_directory`, with all the configuration optimized for you.
versitygw is the simplest "just expose some S3-compatible API on top of some local folder"
S3 Ninja if you really just need something local to try your code with.
The OS's file system? Implementation cost has decreased significantly these days. We can just prompt 'use S3 instead of the local file system' if we need an S3-like service.
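The swap the comment describes is easiest when the application talks to a small storage interface rather than to the file system directly. A minimal sketch (the `BlobStore` interface and class names are illustrative, not from any of the projects mentioned):

```python
import os
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Tiny storage interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalStore(BlobStore):
    """Local-file-system backend; keys map to paths under a root dir."""
    def __init__(self, root: str):
        self.root = root

    def _path(self, key: str) -> str:
        return os.path.join(self.root, key)

    def put(self, key: str, data: bytes) -> None:
        path = self._path(key)
        # Create intermediate directories for keys like "a/b/c".
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(self._path(key), "rb") as f:
            return f.read()

# An S3-backed implementation (e.g. via boto3) could implement the
# same interface later; callers would not change.
```

Keeping the interface this narrow is what makes "use S3 instead of the local file system" a mechanical change rather than a rewrite.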
RustFS is dead simple to set up.
Unfortunately, it has also had a fair bit of drama already for a pretty young project.
seaweedfs: `weed server -s3` is enough to spin up a server locally
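Once that server is up, any S3 client can be pointed at it. A sketch using the AWS CLI (8333 is SeaweedFS's default S3 gateway port; the bucket and file names are illustrative):

```shell
# Start SeaweedFS with the S3 gateway enabled.
weed server -s3 &

# Point the AWS CLI at the local endpoint instead of AWS.
aws --endpoint-url http://localhost:8333 s3 mb s3://test-bucket
aws --endpoint-url http://localhost:8333 s3 cp ./file.txt s3://test-bucket/
aws --endpoint-url http://localhost:8333 s3 ls s3://test-bucket/
```

The same `--endpoint-url` trick works against Garage, RustFS, or any of the other servers in this thread.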