
Comment by BobbyTables2

2 months ago

May sound like a conspiracy theory, but NVIDIA and a whole lot of AI startups have a strong vested interest in not seeking or publishing such findings.

If I don’t need a huge model and GPU, then AI is little more than an open source program running on an idle PC.

I feel like AI was NVIDIA's lifeboat as GPU mining waned. I don't see anything after that in the near future.

I think NVIDIA's future is pretty bright.

We're getting into run-your-capable-LLM-on-prem-or-at-home territory.

Without DeepSeek (and hopefully its successors) I wouldn't really have a use case for something like NVIDIA's Project Digits.

https://www.nvidia.com/en-us/project-digits/

  • Except I can run R1 1.5B on a GPU-less and NPU-less Intel NUC from four or five years ago using half its cores, and the reply speed is… functional (rough sketch of the setup below).

    As the models have gotten more efficient and distillation has improved, the minimum viable hardware for really cooking with LLMs has gone from a 4090 to something a lot of people probably already own.

    I definitely think a Digits box would be nice, but honestly I’m not sure I’ll need one.
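
    To be concrete about the CPU-only setup: this is a minimal sketch using llama-cpp-python with a locally downloaded GGUF of the distilled 1.5B model; the file name, quantization, and thread count here are assumptions rather than a specific recipe.

        from llama_cpp import Llama  # pip install llama-cpp-python

        # Hypothetical local GGUF of DeepSeek-R1-Distill-Qwen-1.5B; path and quant are assumptions.
        llm = Llama(
            model_path="DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf",
            n_ctx=4096,    # modest context window to keep RAM usage low
            n_threads=4,   # roughly "half its cores" on a typical 8-thread NUC CPU
        )

        # One chat turn; generation runs entirely on the CPU, slow but usable at 1.5B.
        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Summarize what model distillation does."}],
            max_tokens=256,
        )
        print(out["choices"][0]["message"]["content"])

    The same model also runs through Ollama or plain llama.cpp; the point is just that nothing in this path needs a discrete GPU.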

    • Yeah, but what was R1 trained with? Something like 50k GPUs, as far as I've heard, as well as distillation from OpenAI's models (basically leaning on their GPUs/GPU time).

      And that's besides the fact that consumers will always want GPUs for gaming, rendering, scientific compute, etc.

      No, I don't own any NVIDIA stock.