Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU

11 hours ago (github.com)

Hi everyone, I'm kinda involved in some retrogaming, and through some experiments I ran into the following question: "Would it be possible to run transformer models bypassing the CPU/RAM, connecting the GPU directly to the NVMe?"

This is the result of that question and some weekend vibecoding (the linked library repository is in the README as well). It seems to work even on consumer GPUs, though it should work better on professional ones.

This is a really impressive piece of systems engineering. The 3-tier adaptive caching (VRAM resident > pinned RAM > NVMe/mmap) is essentially reimplementing what the Linux kernel's page cache does, but with GPU-awareness baked in.
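
Not their actual code, but the tier logic is presumably something along these lines (a minimal sketch with hypothetical names: PyTorch for the VRAM/pinned tiers, mmap standing in for the NVMe path, no eviction or budgeting):

    import mmap
    import numpy as np
    import torch

    class TieredWeightCache:
        """Sketch of a 3-tier weight cache: VRAM resident > pinned RAM > NVMe mmap."""
        def __init__(self, weight_file):
            self.vram = {}      # layer_id -> tensor already on the GPU
            self.pinned = {}    # layer_id -> page-locked host tensor (fast async H2D copies)
            f = open(weight_file, "rb")
            self.mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

        def get(self, layer_id, offset, count, dtype=np.float16):
            if layer_id in self.vram:           # tier 1: no transfer at all
                return self.vram[layer_id]
            if layer_id in self.pinned:         # tier 2: async copy over PCIe
                return self.pinned[layer_id].to("cuda", non_blocking=True)
            # tier 3: read from the NVMe-backed mmap (this goes through the page
            # cache; the actual project DMAs NVMe -> GPU and skips the host hop)
            arr = np.frombuffer(self.mm, dtype=dtype, count=count, offset=offset)
            t = torch.from_numpy(arr.copy()).pin_memory()
            self.pinned[layer_id] = t           # naive promotion, no eviction
            return t.to("cuda", non_blocking=True)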

The 0.3 tok/s for 70B Q4_K_M on a single 3090 is slow for interactive use, but the architecture itself is what matters here. PCIe Gen3 x8 at ~6.5 GB/s is the clear bottleneck - I'd be very curious to see numbers on a Gen5 NVMe setup where sequential reads can hit 12+ GB/s. That alone could potentially double throughput.

The layer skip via cosine similarity calibration (20/80 layers skipped) is a clever trick. Reminds me of early work on adaptive computation in transformers. The quality tradeoff at threshold 0.98 would be interesting to benchmark more rigorously - for many inference tasks like summarization or classification, you could probably push that much further.
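
I haven't looked at how they calibrate it, but the general recipe is simple enough to sketch (hypothetical helper, assuming each layer maps a hidden state to a hidden state): run a calibration set once, record how parallel each layer's output is to its input, and skip the layers that barely rotate the residual stream:

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def calibrate_layer_skips(layers, calib_inputs, threshold=0.98):
        """Return indices of layers whose output stays nearly parallel to their input."""
        sims = [[] for _ in layers]
        for h in calib_inputs:                  # h: [seq, hidden] states for one calibration prompt
            for i, layer in enumerate(layers):
                out = layer(h)
                sims[i].append(F.cosine_similarity(h, out, dim=-1).mean().item())
                h = out                         # propagate so later layers see realistic inputs
        return {i for i, s in enumerate(sims) if sum(s) / len(s) >= threshold}

    # Inference then just does:  h = h if i in skip_set else layer(h)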

Also worth noting: zero external dependencies beyond CUDA Toolkit is a bold design choice. No cuBLAS means they wrote their own GEMM kernels, which is a massive undertaking but gives full control over the memory access patterns needed for this streaming architecture.

  • > No cuBLAS means they wrote their own GEMM kernels, which is a massive undertaking

    Not to diminish the impressiveness of this overall project, but it says right up front that these were vibe-coded, and the Opus 4.6 co-author lines are right there in the commit messages. Those pieces were adapted from existing work via LLM, which is exactly the right use in a proof-of-concept project like this.

0.2 tok/s is slow for chat but perfectly fine for batch/async workloads. I run automated content generation pipelines where a single job kicks off dozens of LLM calls (script generation, metadata, descriptions) and none of them need to be interactive. The whole job takes 20 minutes anyway because of image generation bottlenecks. Being able to run a 70B model locally for those batch calls instead of paying per-token API costs would be a significant cost reduction, even at this speed.

  • Cost wise it does not seem very effective. .5 token / sec (the optimized one) is 3600 tokens an hour, which costs about 200-300 watts for an active 3090+system. Running 3600 tokens on open router @.4$ for llama 3.1 (3.3 costs less), is about $0,00144. That money buys you about 2-3 watts (in the Netherlands).

    Great achievement for privacy inference nonetheless.

    • I think we use different units. In my system there are 3600 seconds per hour, and watts measure power.

    • Something to consider is that input tokens have a cost too. They are typically processed much faster than output tokens. If you have long conversations then input tokens will end up being a significant part of the cost.

      It probably won't matter much here though.

0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident will simply deliver a better latency-quality tradeoff.

  • yeah, actually I wanted to see if this was possible at all. I managed to get around 3000 tokens/s on a PS2 with classic transformers, since the Emotion Engine is capable of 32-bit addressing, but it only has like 32 MB of RAM. So I ran into the question of why that was so fast when I couldn't get that speed even with small models on a regular machine, and the deal is that on the PS2 the data goes straight from memory to the GPU. That's the main difference from how a regular computer does inference: everything has to go through the CPU every time. As I mentioned too, on professional cards you can avoid these problems naturally, since they have features built precisely for this, but sadly I don't have 30k bucks to spare on a GPU :(

  • I can imagine a couple scenarios in which a high-quality, large model would be much preferred over lower latency models, primarily when you need the quality.

  • That's slower than just running it off CPU+GPU. I can easily hit 1.5 tokens/s on a 7950X+3090 and a 20480-token context.

  • I didn't really understand the performance table until I saw the top ones were 8B models.

    But 5 seconds/token is quite slow, yeah. I guess this is for low-RAM machines? I'm pretty sure my 5950X with 128 GB of RAM can run this faster on the CPU, with some layers / prefill on the 3060 GPU I have.

    I also see that they claim the process is compute bound at 2 seconds/token, but that doesn't seem correct with a 3090?

    • LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.

      DDR4 tops out at about 27 GB/s

      DDR5 can do around 40 GB/s

      So for a 70B model at 8-bit quant, you will get around 0.3-0.5 tokens per second using RAM alone.
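
      Back-of-the-envelope, using only figures already in this thread (estimates, not measurements):

        # tok/s ~= memory_bandwidth / bytes_streamed_per_token (weights dominate;
        # ignores whatever stays resident in VRAM and any skipped layers)
        def toks_per_sec(bandwidth_gb_s, model_gb):
            return bandwidth_gb_s / model_gb

        print(toks_per_sec(27, 70))   # DDR4, 70B @ 8-bit         -> ~0.39
        print(toks_per_sec(40, 70))   # DDR5, 70B @ 8-bit         -> ~0.57
        print(toks_per_sec(6.5, 40))  # PCIe Gen3 x8, 70B Q4_K_M  -> ~0.16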


Yeah, GPUDirect Storage should allow you to DMA straight to/from a storage device.
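
For anyone who wants to poke at that from Python, the RAPIDS kvikio bindings are probably the lowest-friction way in. Rough sketch (the file name is hypothetical), and as I understand it kvikio falls back to an ordinary host-buffer read when GPUDirect Storage isn't actually enabled:

    import cupy
    import kvikio

    # Pull a shard of weights from an NVMe-backed file straight into GPU memory.
    buf = cupy.empty(32 * 1024 * 1024, dtype=cupy.float16)  # 64 MiB of fp16 on the GPU
    f = kvikio.CuFile("layer_00.bin", "r")                   # hypothetical weight shard
    nbytes = f.read(buf)                                     # peer-to-peer DMA if GDS is set up
    f.close()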

I wonder... what if the m.2 storage was actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs just adding more host memory? The m.2 ram would be less flexible, but would keep the system ram free for the CPU.

  • This is exactly what I was wondering

    I gave a talk a few years ago at the Dask summit (conf?) where we were playing with parallel SSD storage arrays (30 x 3 GB/s?) -> GPUDirect Storage -> 4 x 30 GB/s PCIe (?) -> 8 x A100 GPUs, something like that. It'd be cool to see the same thing for a multi-GPU MoE, or even a single-GPU one for that matter.

  • Doesn't m.2 storage that's actually DRAM - hopefully meaning NVMe/PCIe rather than SATA speeds - already exist as Compute Express Link (CXL), just not in this specific m.2 form factor? If only RAM weren't silly expensive right now, one could use 31 GB/s of additional bandwidth per NVMe connector.

Really cool. I'm wondering: what background did you need to be able to think of the question that resulted in this project?

I know you said you're involved in some retrogaming and were experimenting, but as someone who works in a world where hardware is pretty heavily abstracted away, even if I got into retrogaming I don't know that I'd consider that there may be a systems improvement lying around. Beyond the creative aspect, it feels like there is some systems and hardware background that helped put the idea together (and I'd be interested to go learn some of that systems/hardware knowledge myself).

  • I wonder too. DMA played a huge role in most older gaming consoles, back when the CPUs were far more sluggish.

    Perhaps that's what made them think to try.

    Or perhaps it was the current batch of smart memory cards, which on the PS2 I believe have quite complex DMA capabilities for streaming game data from the SD card.

    • Why not the PS5? That's when games started streaming assets straight from the NVMe SSD to the GPU. In this case the assets are weights.

Cool project. Can you provide more details about your DKMS patching process for consumer GPUs? This would be fun to try out, but I’d need some more details on that patch process first.

This is an interesting area for experiments. I suspect that in the longer term, model optimization (knowing which bits you can leave out without affecting the functioning of the model) will become the dominant area of research, just as it did with compression algorithms, because effectively a model is a lossy compression scheme.

And that's good because that increases democratization of AI away from the silos that are being created.

  • The compression analogy is interesting. Another way of looking at it could be fine-tuning as "knowing what to leave out" - a 3B model for example tuned for a narrow task doesn't need the capacity that makes 70B good at many things.

Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.

One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.

I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.
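
The predict-ahead half is easy enough to sketch in isolation (hypothetical scaffolding below, not SGLang or NIXL APIs): guess the next layer's experts from the current hidden state and start the NVMe -> GPU pulls while the current layer is still computing.

    from concurrent.futures import ThreadPoolExecutor

    class ExpertPrefetcher:
        """Overlap expert loads with compute; mispredictions fall back to a synchronous load."""
        def __init__(self, load_expert, workers=2):
            self.load = load_expert             # hypothetical: (layer, expert_id) -> weights on GPU
            self.pool = ThreadPoolExecutor(max_workers=workers)
            self.pending = {}                   # (layer, expert_id) -> Future

        def prefetch(self, layer, expert_ids):
            for e in expert_ids:
                key = (layer, e)
                if key not in self.pending:
                    self.pending[key] = self.pool.submit(self.load, layer, e)

        def get(self, layer, expert_id):
            fut = self.pending.pop((layer, expert_id), None)
            return fut.result() if fut else self.load(layer, expert_id)

    # Per layer: route layer i, guess layer i+1's experts from the same hidden state,
    # call prefetch(i + 1, guessed), then run layer i's matmuls while the reads land.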

Curious if anyone's tried this already.

  • That would be nice to see. Actually, I was thinking about getting another 3090 and a mobo upgrade, since I'm bottlenecked by PCIe 3, to try running GLM 4.7 or 5 at Q4_K_M; it should be possible.

Umm, sorry, but the CPU can easily keep up shuttling data to/from your NVMe, especially over ancient Gen3 PCIe. Not sure why you'd do this.

I wonder - could this be used for a multi-tier MoE? E.g. active + most-used experts in VRAM, often-used ones in RAM, and less-used ones on NVMe?

  • Yeah, I’ve often wondered why folks aren’t training two-tier MoEs for VRAM + RAM. We already have designs for shared experts, so it can’t be that hard to implement a router that allocates 10x or 100x as often to “core” experts vs the “nice to have” experts. I suppose balancing during training is tricky, but some sort of custom loss on the router layers should work.

    I’ve also wondered why the routers aren’t trained to be serially consistent, so you can predict layers to swap into VRAM a few layers ahead to maximize available bandwidth.
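
    Something like this toy router is roughly what I have in mind (made-up shapes, not from any real model): a fixed per-expert logit bias that sends roughly 10x more traffic to the “core” experts you’d keep in VRAM.

      import torch
      import torch.nn as nn

      class TwoTierRouter(nn.Module):
          """Toy MoE router biased toward a small 'core' set of VRAM-resident experts."""
          def __init__(self, hidden, n_experts, core_experts, core_bias=2.3, k=2):
              super().__init__()
              self.gate = nn.Linear(hidden, n_experts, bias=False)
              bias = torch.zeros(n_experts)
              bias[list(core_experts)] = core_bias    # +2.3 on the logit ~= 10x odds after softmax
              self.register_buffer("tier_bias", bias)
              self.k = k

          def forward(self, h):                       # h: [tokens, hidden]
              logits = self.gate(h) + self.tier_bias  # nudge routing toward the hot tier
              weights, experts = torch.topk(torch.softmax(logits, dim=-1), self.k, dim=-1)
              return weights, experts                 # cold experts still win when they clearly fit

    The custom balancing loss would need the same bias baked in so training doesn’t just fight the skew.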

    • I think part of the issue is that in production deployments, you're batching high enough that you'll be paging in those long tail experts constantly.

      Unless you're handling that in some kind of fancy way, you'll be holding up the batch while waiting for host memory, which will kill your throughput.

      It makes much more sense for non batched local inference, especially if you can keep the MoE routing stable like you say, but most folks aren't optimising for that.


    • Maybe I am misunderstanding something but:

      1) This is basically the intention of several recent MoE models: keep particular generally useful experts hot in VRAM.

      2) Unless you can swap layers in faster than you consume them, there is no point to predicting layers (what does this even really mean? did you mean predicting experts?).

      It seems at the moment the best you can do is keep experts and layers more likely to be used for a given query in VRAM and offload the rest, but this is work-dependent.

Didn't DirectX add an API (DirectStorage) for loading assets directly to GPU memory? Would that work?

I feel like we need an entirely new type of silicon for LLMs. Something completely focused on bandwidth and storage probably at the sacrifice of raw computation power.

Could be neat to see what happens when you give the 8B like 6 GB instead of 10 GB. Something in-between, where you still need NVMe, but not like the 3x ratio of the 70B model on 23 GB.

Nice work. PCIe P2P (GPUDirect (tm)) is such great stuff. Cool to see!