
Comment by 01100011

16 hours ago

Yeah, GPUDirect should allow you to DMA straight to a storage device.
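For a concrete picture, here's a minimal sketch of that path using kvikio, NVIDIA's Python binding over the cuFile/GPUDirect Storage API. The file name and size are made up, and as far as I know cuFile quietly falls back to a bounce-buffer compatibility mode on filesystems that don't support GDS:

    # Sketch: read a checkpoint shard straight from NVMe into GPU memory via GPUDirect Storage.
    import cupy as cp
    import kvikio

    nbytes = 4 * 1024**3                           # 4 GiB shard (made-up size)
    buf = cp.empty(nbytes, dtype=cp.uint8)         # destination buffer in GPU memory

    f = kvikio.CuFile("weights-shard-0.bin", "r")  # hypothetical file name
    nread = f.read(buf)                            # DMA NVMe -> GPU, no host staging copy
    f.close()
    print(f"read {nread / 1e9:.1f} GB into device memory")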

I wonder... what if the M.2 storage were actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs. just adding more host memory? The M.2 RAM would be less flexible, but it would keep the system RAM free for the CPU.

Yeah, a ramdisk would probably work wonders. It's a shame Intel Optane didn't become a standard; it would have been amazing for those kinds of workflows.

  • Ya know, here on the local market there are a bunch of Optanes floating around. I'll try to get hold of one to check whether there's any improvement.

    • Optanes will be good for latency, but not so much for bandwidth, which seems to be your major bottleneck, if I'm not mistaken?

This is exactly what I was wondering.

I gave a talk a few years ago at the Dask Summit (conf?) on making the stars align with dask-cudf here. We were helping a customer accelerate log analytics by proving out our stack on nodes that looked roughly like: parallel SSD storage arrays (30 x 3 GB/s?) -> GPUDirect Storage -> 4 x 30 GB/s PCIe (?) -> 8 x A100 GPUs, something like that. It'd be cool to see the same thing now in the LLM world, such as a multi-GPU MoE, or even a single-GPU one for that matter!
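For anyone curious what that stack looks like in code today, here's a rough single-node sketch with dask-cudf; the env var, path, and column name are my own assumptions for illustration, not details from the talk:

    # Rough sketch of a GDS-backed dask-cudf log-analytics job.
    import os
    os.environ["LIBCUDF_CUFILE_POLICY"] = "GDS"  # assumption: env var asking libcudf to use cuFile/GDS

    from dask_cuda import LocalCUDACluster
    from dask.distributed import Client
    import dask_cudf

    if __name__ == "__main__":
        cluster = LocalCUDACluster()             # one worker per visible GPU by default
        client = Client(cluster)

        logs = dask_cudf.read_parquet("/mnt/ssd-array/logs/*.parquet")  # hypothetical path
        counts = logs.groupby("status").size().compute()                # hypothetical column
        print(counts)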

Doesn't "M.2 storage, but DRAM" - hopefully meaning NVMe/PCIe rather than SATA speeds - already exist as Compute Express Link (CXL), just not in this specific M.2 form factor? If only RAM weren't silly expensive right now, one could add 31 GB/s of additional bandwidth per NVMe connector.
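As a sanity check on that 31 GB/s figure, here's the back-of-envelope math, assuming it counts both directions of a PCIe 5.0 x4 link (the comment doesn't say which generation):

    # Theoretical PCIe throughput for an x4 NVMe/M.2 connector.
    GT_PER_S = {"gen4": 16, "gen5": 32}   # transfer rate per lane (GT/s)
    ENCODING = 128 / 130                  # 128b/130b line-coding overhead
    LANES = 4

    for gen, gt in GT_PER_S.items():
        per_dir = gt * ENCODING * LANES / 8   # GB/s in one direction
        print(f"{gen}: {per_dir:.1f} GB/s per direction, {2 * per_dir:.1f} GB/s both ways")
    # gen5 x4 works out to ~15.8 GB/s each way, ~31.5 GB/s if you count both directions.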