
Comment by ajross

21 hours ago

Just buy more RAM and you get that for free. Really I guess that's my sense of patches like this in general: while sure, filesystem research has a long and storied history and it's a very hard problem in general that attracts some of the smartest people in the field to do genius-tier work...

Does it really matter in the modern world where a vanilla two-socket rack unit has a terabyte of DRAM? Everything at scale happens in RAM these days. Everything. Replicating across datacenters gets you all the reliability you need, with none of the fussing about storage latency and block device I/O strategy.

Actually, it doesn't work like that.

Sun's ZFS7420 had a terabyte of RAM per controller, with the two controllers working in tandem. Past a certain pressure the thing still can't keep up, even though it also uses specialized SSDs to reduce HDD array access during requests, and these were blazingly fast boxes for their time.

When you drive a couple of thousand physical nodes with multi-petabyte volumes, no amount of RAM can save you. This is why Lustre separates metadata servers and their volumes from the file (object storage) ones. You can keep very small files in the metadata area (à la Apple's zero-sized, data-in-resource-fork implementation), but for bigger data you need a good filesystem underneath. There's no workaround for this.
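
To make that metadata/data split concrete, here's a toy sketch in Python. It's only an illustration of the idea, not Lustre code; the names (DOM_THRESHOLD, NUM_OSTS, etc.) and sizes are made up. Small files live inline in the metadata record, while large files are striped across object storage targets and the metadata entry keeps only the layout.

    # Toy model of the split: a metadata service that inlines tiny files,
    # and object storage targets (OSTs) that hold stripes of everything else.
    DOM_THRESHOLD = 64 * 1024                 # hypothetical inline-data cutoff
    STRIPE_SIZE = 1 * 1024 * 1024
    NUM_OSTS = 8

    metadata = {}                              # stands in for the MDT
    osts = [dict() for _ in range(NUM_OSTS)]   # stands in for the OSTs

    def write(path: str, data: bytes) -> None:
        if len(data) <= DOM_THRESHOLD:
            # small file: one trip to the metadata server, no OST involved
            metadata[path] = {"size": len(data), "inline": data}
            return
        # large file: stripe the payload, record only the layout in metadata
        layout = []
        for off in range(0, len(data), STRIPE_SIZE):
            ost = (off // STRIPE_SIZE) % NUM_OSTS
            osts[ost][(path, off)] = data[off:off + STRIPE_SIZE]
            layout.append((ost, off))
        metadata[path] = {"size": len(data), "layout": layout}

    def read(path: str) -> bytes:
        entry = metadata[path]
        if "inline" in entry:
            return entry["inline"]
        return b"".join(osts[ost][(path, off)] for ost, off in entry["layout"])

    write("/scratch/tiny.conf", b"x" * 100)                  # metadata alone
    write("/scratch/model.ckpt", b"y" * (5 * STRIPE_SIZE))   # spread over 5 OSTs
    assert read("/scratch/model.ckpt") == b"y" * (5 * STRIPE_SIZE)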

If you want to go faster, take a look at Weka and GPUDirect. Again, when you're pumping tons of data into your GPUs to keep them training/inferring, no amount of RAM can hold that data (or sustain the throughput) under that kind of chaotic access.

When we talked about performance, we used to say GB/sec. Now a single SSD provides the IOPS and throughput that a whole storage cluster used to deliver. Instead, we talk about TB/sec in some cases. You can casually connect terabit-class Ethernet (or InfiniBand if you prefer that) to a server with a couple of cables.
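
Back-of-the-envelope, using assumed round numbers rather than any vendor's spec sheet (an 800 Gbit/s link standing in for "terabit-class", roughly 12 GB/s for a fast PCIe 5.0 NVMe drive):

    # Rough arithmetic only; both numbers are assumptions, not benchmarks.
    link_gbit = 800                      # "terabit-class" NIC, in Gbit/s
    link_gbyte = link_gbit / 8           # 100 GB/s of line rate
    nvme_gbyte = 12                      # one fast PCIe 5.0 NVMe SSD, GB/s
    drives_to_saturate = link_gbyte / nvme_gbyte
    print(f"{link_gbyte:.0f} GB/s link ~ {drives_to_saturate:.0f} fast NVMe drives")
    # A couple of such links and a shelf of NVMe is how "TB/sec" stops
    # sounding exotic for a single rack.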

  • > When you drive a couple of thousand physical nodes with multi-petabyte volumes

    You aren't doing that with ZFS or btrfs, though. Datacenter-scale storage solutions (cf. Lustre, which you mention) have long since abandoned traditional filesystem techniques like the one in the linked article. And they rely almost exclusively on RAM behavior for their performance characteristics, not the underlying storage (which usually ends up being something analogous to a pickled transaction log; it's not a format you're expected to manage per-operation).

    • > You aren't doing that with ZFS or btrfs, though.

      ZFS can, and is actually designed to, handle that kind of workload, though. At full build-out, the ZFS7420 is an 84U system. Every disk box has its own set of "log" SSDs and 10 additional HDDs. Plus it was one of the rare systems that supported InfiniBand access natively, and it was able to saturate all of its InfiniBand links under immense load.

      Lustre's performance is not RAM-bound when driving that kind of load; this is why MDT arrays are smaller and generally all-flash, while OSTs can be built from a mix of technologies. As I said, when driving that number of clients from a relatively small number of servers, it's not possible to keep all the metadata in RAM and serve queries from there. Yes, Lustre recommends high RAM and core counts for the servers driving OSTs, but that's for file content throughput when many clients are requesting files, and here we're primarily discussing file metadata access.

      4 replies →

    • Besides ZFS (and I've heard of exabyte-sized ZFS filesystems), bcachefs is absolutely capable of petabyte-sized filesystems.

Some time ago (back when we were using spinning rust) I was wondering whether one could bypass the latency of disk access when replicating to multiple hosts. I mean, how likely is it that two hosts crash at the same time? Well, it turns out that there are causes which take out multiple hosts simultaneously (an all-too-common one seems to be diesel generators that fail to start after a power failure). I think the good fellas at Amazon, Meta and Google even have stories to tell about a whole data center failing. So you need replication across data centers, but then network latency bites ya. Current NVMe storage devices are faster than that (and for some access patterns nearly as fast as RAM).
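
For what it's worth, the "acknowledge once enough replicas have it, don't wait for the local disk" idea looks roughly like the sketch below. The replica names, latencies, and quorum size are invented for illustration; as noted above, a real deployment still needs cross-DC copies precisely because of correlated failures, and then the cross-DC RTT is what you pay instead of the disk.

    import asyncio, time

    # Pretend round-trip times, in seconds; purely illustrative.
    REPLICAS = {"same-rack": 0.0002, "dc-west": 0.012, "dc-east": 0.015}

    async def send_to_replica(name: str, rtt: float, record: bytes) -> str:
        await asyncio.sleep(rtt)       # stands in for the network round trip
        return name                    # the replica now holds the record in RAM

    async def replicated_write(record: bytes, quorum: int = 2) -> None:
        start = time.perf_counter()
        pending = {asyncio.create_task(send_to_replica(n, rtt, record))
                   for n, rtt in REPLICAS.items()}
        acked = 0
        while acked < quorum:
            done, pending = await asyncio.wait(
                pending, return_when=asyncio.FIRST_COMPLETED)
            acked += len(done)
        # Quorum reached: this is where the client gets its acknowledgement.
        # The slowest replica and the local fsync catch up in the background.
        print(f"acked after {(time.perf_counter() - start) * 1e3:.1f} ms")
        await asyncio.gather(*pending)  # only to keep the sketch tidy

    asyncio.run(replicated_write(b"some record"))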

And that's just at the largest scale. I'm pretty sure banks still insist that the data is written to (multiple) disks (aka "stable storage") before completing a transaction.
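
That requirement, in miniature, is just the classic "write, fsync, only then acknowledge" ordering. Nothing below is bank code; it's the generic pattern, and "multiple disks" in practice means repeating it on replicas before answering the client.

    import os

    def commit_record(path: str, record: bytes) -> None:
        # O_APPEND so concurrent writers can't interleave mid-record.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
        try:
            os.write(fd, record)
            os.fsync(fd)   # block until the device reports the data durable
        finally:
            os.close(fd)
        # Only after fsync returns is it safe to report "committed".

    commit_record("journal.log", b"txn 42: debit A, credit B\n")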

> Does it really matter in the modern world

Considering that multiple ZFS developers get paid to make ZFS work well on petabyte-sized disk arrays with SSD caching, and that one of them often reports on progress in this area on his podcasts (2.5admins.com and BSD Now, if you're interested)... then yes?