Comment by bayindirh
1 day ago
> You aren't doing that with ZFS or btrfs, though.
ZFS can, and is actually designed to, handle that kind of workload, though. At full scale, the ZFS7420 is an 84U configuration. Every disk box has its own set of "log" SSDs and 10 additional HDDs. Plus, it was one of the rare systems that supported InfiniBand access natively, and it was able to saturate all of its InfiniBand links under immense load.
Lustre's performance is not RAM-bound when driving that kind of load; this is why MDT arrays are smaller and generally all-flash, while OSTs can be built from a mix of technologies. As I said, when driving that number of clients from a relatively small number of servers, it's not possible to keep all the metadata in RAM and query it from there. Yes, Lustre recommends high RAM and core counts for the servers driving OSTs, but that's for file content throughput when many clients are requesting files, and we're discussing file metadata access primarily.
Again, I think we're talking past each other. I'm saying "traditional filesystem-based storage management is not performance-limited at the scale where everything is in RAM, so I don't see the value of optimizations like that". You seem to be taking it as a prior that at scale everything doesn't fit in RAM, so traditional filesystem-based storage management is still needed.
But... everything does fit in RAM at scale. I mean, Cloudflare basically runs a billion-dollar business whose product is essentially "We store the internet in RAM in every city". The whole tech world is aflutter right now over a technology base that amounts to "We put the whole of human experience into GPU RAM so we can train our new overlords". It's RAM. Everything is RAM.
I'm not saying there is "no" home for excessively tuned, genius-tier filesystem-over-persistent-storage code. I'm just saying that it's not a very big home, that the market has mostly passed the technology over, and that frankly patches like the one in the linked article seem like a waste of effort to me versus going to Amazon and buying more RAM.
Cloudflare's cache is a tiered cache with RAM and SSDs, not just RAM.
source: https://blog.cloudflare.com/why-we-started-putting-unpopular...
> Our storage layer, which serves millions of cache hits per second globally, is powered by high IOPS NVMe SSDs.
These patches came from Oracle. Pretty sure they have a client somewhere that needs this.
No, it doesn't. You think in a very static manner. Yes, you can fit websites in RAM, but you can't fit the databases powering them. Yes, you can fit some part of the videos or images you're working on or serving in RAM, but you can't store whole catalogs in RAM.
Moreover, you again give examples from the end product: finished sites, minified JS files, compressed videos, compiled models...
There's much more than that. The model is in RAM, but you need to push tons of data through that GPU, sometimes terabytes of it. You have raw images to process, raw video to color-grade, unfiltered scientific data to sift through. These files are huge.
A well-processed JPG from my camera is around 5MB, but the RAW version I work from is 25MB per frame, and that's a 24MP image, puny by today's standards. Your run-of-the-mill 2K video takes a couple of GB after the final render at movie length; the RAWs take tens of terabytes, at minimum. Unfiltered scientific data again comes in the terabyte-to-petabyte range depending on your project and the instruments you work with, and multiple such groups pull their own big datasets to process in real time.
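Rough arithmetic behind those figures, as a sketch (the bit depths, frame rate and shooting ratio here are my illustrative assumptions, not measurements):

    MP = 1_000_000

    def raw_still_mb(megapixels=24, bits_per_pixel=14):
        """Uncompressed single-frame RAW size in MB (one sample per photosite)."""
        return megapixels * MP * bits_per_pixel / 8 / 1e6

    def raw_footage_tb(width=2048, height=1080, bits_per_pixel=12,
                       fps=24, runtime_min=100, shooting_ratio=20):
        """RAW video captured for a movie-length project, in TB.
        shooting_ratio: material shot per unit that ends up in the final cut."""
        frame_bytes = width * height * bits_per_pixel / 8
        frames = fps * runtime_min * 60 * shooting_ratio
        return frame_bytes * frames / 1e12

    # ~42 MB uncompressed; lossless compression brings it near the 25 MB cited above
    print(f"24MP RAW still : ~{raw_still_mb():.0f} MB")
    # ~10 TB with these assumptions; higher resolutions or multiple cameras push it into the tens
    print(f"2K RAW footage : ~{raw_footage_tb():.1f} TB")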
In my world, nothing fits in RAM except the runtime data, and that's your application plus some intermediate data structures. The rest is read from small to gigantic files and written to files of unknown sizes, by multiple groups, simultaneously. These systems experience the real meaning of "saturation", and they would really swear at us in some cases.
Sorry, but you can't solve this problem by buying more RAM, because these workloads can't be moved to the cloud. They need to be local, transparent and fast. IOW, you need disk systems which feel like RAM. Again, look at what Weka (https://www.weka.io/) does. It's one of the most visible companies making systems that behave like a huge pool of RAM, but built from multiple machines and tons of cutting-edge SSDs, because what they process doesn't fit in RAM.
Lastly, there's a law whose name I forget every time, which says that if you cache the 10 most used files, you can serve up to 90% of your requests from that cache, provided your request pattern is stable. In the cases I cite, there's no "popular" file. Everybody wants their own popular files, which makes access "truly random".
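To show how much that assumption matters, here's a minimal sketch (toy numbers, nothing measured): hit rate of a cache holding the most popular 10% of files, under a skewed (Zipf-like) popularity distribution versus the uniform "everybody wants their own files" pattern I'm describing.

    import random

    def hit_rate(popularity_weights, cache_fraction=0.10, requests=200_000):
        items = list(range(len(popularity_weights)))
        cache_size = int(len(items) * cache_fraction)
        # Cache only the most popular items.
        by_popularity = sorted(items, key=lambda i: popularity_weights[i], reverse=True)
        cached = set(by_popularity[:cache_size])
        sample = random.choices(items, weights=popularity_weights, k=requests)
        return sum(1 for i in sample if i in cached) / requests

    n_files = 10_000
    zipf_weights = [1 / (rank + 1) for rank in range(n_files)]   # skewed popularity
    uniform_weights = [1.0] * n_files                            # "truly random" access

    print(f"Zipf-like access, 10% cache : {hit_rate(zipf_weights):.0%}")    # roughly 75%
    print(f"Uniform access,   10% cache : {hit_rate(uniform_weights):.0%}") # roughly 10%

With skewed popularity a small cache absorbs most of the traffic; with uniform access the hit rate collapses to the cache fraction, which is exactly why these workloads hammer the storage layer instead of the cache.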