Comment by bee_rider
1 day ago
LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?
Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.
If you have enough RAM, you can run Qwen A3B models on the CPU.
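For example, a minimal sketch using llama.cpp's `llama-cli` (the GGUF filename and thread count here are placeholders, not a specific recommendation):

```shell
# Run a Qwen A3B MoE model entirely on CPU with llama.cpp.
# The model filename is whichever quantized GGUF you downloaded.
# -t: thread count (match your physical cores), -c: context size.
./llama-cli -m qwen3-30b-a3b-q4_k_m.gguf -t 16 -c 8192 \
  -p "Explain what a mixture-of-experts model is."
```

The "A3B" part is the point: only a few billion parameters are active per token, so CPU inference stays tolerable as long as the full weights fit in RAM.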
RAM got a little more expensive lately for some reason.
Claude Code with Opus is a completely different creature from aider with Qwen on a 3090.
The latter writes code; the former solves problems with code and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).