Comment by poszlem

2 days ago

Between this and whatever Claude has been doing lately, like giving the model the ability to simply disconnect if it dislikes your prompt, I really hope more people realize that local LLMs are where it's at.

> I really hope more people realize that local LLMs are where it's at

No worries, the AI companies thought ahead: by sending GPU, RAM, and now even hard-drive prices through the roof, they've made sure you won't have a computer to run a local model on.

Have you actually hit that? I thought it only kicked in in extreme cases where Claude "felt uncomfortable", like sustained, heavy psychological coercion; the idea was that Claude shouldn't be forced to keep replying endlessly.

> I really hope more people realize that local LLMs are where it's at.

Maybe, if you have the tens of thousands of dollars' worth of hardware required to run models like DeepSeek, GLM, or Kimi locally. Most people don't, though.

  • Why do most people need large language models?

    As far as I understand, the main contingent of HN is engineers and programmers. Even for me, working in a country (Russia) where an engineer's salary is tiny compared to Europe or the United States, it was not difficult to buy hardware powerful enough to run most large local models and train LoRAs; for programmers earning six-figure dollar incomes, it's even easier.