Comment by giancarlostoro

18 hours ago

Yeah, I've heard of people swapping out the model that Claude Code calls, and apparently it's not THAT much of a difference. What I'd love to see from Anthropic instead is smaller models. I don't even care if they're "open source" or not; just let me pull down a model that takes maybe 4 or 6 GB of VRAM onto my local box and use it for the coding agents. You can still direct and guide it with Opus, so why not cut costs for everyone (consumers and Anthropic alike) by letting users run some of the compute locally? I can squeeze about 16 GB of VRAM out of my MacBook Pro, and I'm fine running a few smaller models locally with the guiding hand of Opus or Sonnet, for less compute on the API side.
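
To be clear, you can wire up a crude version of this split yourself today. Here's a rough Python sketch of what I mean: the hosted model does the planning, a small local model does the bulk of the generation. It assumes an Ollama server running on localhost and an ANTHROPIC_API_KEY in the environment; the model names are just placeholders, and this obviously isn't anything Anthropic ships.

```python
import os
import requests

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"
OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def plan_with_claude(task: str) -> str:
    """Ask the big hosted model to break a task into small steps."""
    resp = requests.post(
        ANTHROPIC_URL,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
        },
        json={
            "model": "claude-sonnet-4-20250514",  # placeholder planner model
            "max_tokens": 1024,
            "messages": [{
                "role": "user",
                "content": f"Break this coding task into numbered steps: {task}",
            }],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["content"][0]["text"]

def code_with_local_model(plan: str) -> str:
    """Hand the plan to a small local model served by Ollama."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": "qwen2.5-coder:7b",  # any ~4-6 GB local coding model
            "prompt": f"Write the code for this plan:\n{plan}",
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    plan = plan_with_claude("add a retry decorator to our HTTP client")
    print(code_with_local_model(plan))
```

The point being: the hosted model only ever sees the planning tokens, and the heavy generation runs on your own GPU.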

So, like, why don’t people just use the better-than-Claude OpenCode CLI with these other just-as-good-as-Claude models?

Anthropic might have good models, but they are the worst as a company. I mentioned in another thread how they do whatever they can to bypass bot-detection protections to scrape content.

Not sure there are any models yet that give you the quality you need to do this while still running on your MBP.