Comment by cr125rider

14 hours ago

This is satire, right?

No, AI capabilities of some sort are obviously important. But I know a lot of people don't appreciate that.

But you aren't seriously suggesting that graphics hardware is irrelevant, are you?

  • The vast majority of folks are using models from the cloud today. If you want to spend big on local ML, there are other options. Maybe a future starfighter with Panther Lake and unified memory is in the cards, but not today.

  • The few things that make me agree with GP:

    1. "AI" is a marketing term used by the likes of OpenAI/Anthropic/Google. LocalLLaMA communities prefer "LLM" or "model". So for a lot of people, "AI" is just a service (see point 4).

    2. "AI capability" is an irrelevant spec and a marketing slug. The hardware specs will give you the information you need to size up a model[0][1].

    3. If you want to run a model locally, you'll know that a midrange notebook isn't the device to look for. Instead, look at workstations with discrete graphics cards and lots of VRAM (24 GB+), Strix Halo APUs, a MacBook with lots of RAM, or dedicated machines like the NVIDIA DGX Spark[2].

    4. An inference engine can run anywhere, and you can pick any LLM hosting service. LLM clients just expect an API endpoint anyway.

    [0]: https://www.canirun.ai/

    [1]: https://www.caniusellm.com/

    [2]: https://www.nvidia.com/en-us/products/workstations/dgx-spark...
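To make point 2 concrete, here's a back-of-envelope sketch of how raw specs map to what a machine can run. The 4-bit default and the 20% overhead factor for KV cache etc. are rough assumptions on my part, not exact figures (the sites in [0][1] do this more carefully):

```python
def vram_gb(params_billion: float, bits_per_weight: int = 4, overhead: float = 1.2) -> float:
    """Rough GB of memory needed to run a quantized model locally.

    Assumes weights dominate; the 1.2x overhead factor for KV cache and
    runtime buffers is a ballpark guess, not a measured number.
    """
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weight_gb * overhead

# vram_gb(8)  -> about 5 GB: an 8B model at 4-bit fits on a decent consumer GPU
# vram_gb(70) -> about 42 GB: a 70B model needs 24GB+ cards, Strix Halo, or a Mac with lots of RAM
```

This is exactly why "AI capability" as a spec tells you nothing: what matters is how many GB of (V)RAM you have versus the model you want to run.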
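And for point 4, a minimal sketch of why the client side doesn't care where the model runs. The base URL and model name below are placeholders for whatever OpenAI-compatible server you point it at (llama-server, Ollama, vLLM, or a hosted API); the client code is identical either way:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str,
                       api_key: str = "none") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for any compatible server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

def chat(base_url: str, model: str, prompt: str, api_key: str = "none") -> str:
    """POST the request and pull the reply text out of the response."""
    with urllib.request.urlopen(build_chat_request(base_url, model, prompt, api_key)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Same call, local or hosted -- only the endpoint and model name change:
# chat("http://localhost:8080", "qwen2.5-7b-instruct", "hello")
# chat("https://api.openai.com", "gpt-4o-mini", "hello", api_key="sk-...")
```

Swap the base URL and you've swapped your "AI" provider, which is why the spec sheet of a midrange notebook is beside the point.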

Off topic, but I like your username! Ironically I have matching 2003 CR85 and CR250s but not the 125 :P