Comment by vessenes
7 months ago
I tried Kimi on a few coding problems that Claude was spinning on. It’s good. It’s huge, way too big to be a “local” model — I think you need something like 16 H200s to run it - but it has a slightly different vibe than some of the other models. I liked it. It would definitely be useful in ensemble use cases at the very least.
Reasonable speeds are possible with 4-bit quants on two 512 GB Mac Studios (MLX TB4 Ring - see https://x.com/awnihannun/status/1943723599971443134) or even a single-socket Epyc system with >1 TB of RAM (about the same real-world memory throughput as the M Ultra). So $20k-ish to play with it.
For real-world speeds, though, yeah, you'd need serious hardware. This is more of a "deploy your own stamp" model, less a "local" model.
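As a rough sanity check on those hardware options: decode speed for a big MoE model like this is mostly bounded by memory bandwidth, so a back-of-envelope sketch gives a feel for the ceilings involved. All numbers below are assumptions (roughly 32B active parameters per token, about 0.6 bytes per parameter for a 4-bit quant including overhead, and approximate bandwidth per machine); real throughput lands well below these ceilings once prompt processing and overhead are included.

    # Back-of-envelope decode ceiling for a memory-bandwidth-bound MoE model.
    # Assumed: ~32B parameters touched per generated token, ~0.6 bytes/param
    # for a 4-bit quant with scales/overhead, rough bandwidth per machine.
    ACTIVE_PARAMS = 32e9
    BYTES_PER_PARAM = 0.6
    bytes_per_token = ACTIVE_PARAMS * BYTES_PER_PARAM  # ~19 GB read per token

    machines = {
        "512 GB M-series Ultra Mac Studio": 800e9,   # ~800 GB/s unified memory (approx.)
        "single-socket Epyc, 12ch DDR5":    460e9,   # ~460 GB/s theoretical, less in practice
    }

    for name, bandwidth_bps in machines.items():
        ceiling_tps = bandwidth_bps / bytes_per_token
        print(f"{name}: <= {ceiling_tps:.0f} tok/s (decode ceiling)")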
Reasonable speeds are possible if you pay someone else to run it. Right now both NovitaAI and Parasail are running it, both available through OpenRouter and both promising not to store any data. I'm sure the other big model hosts will follow if there's demand.
I may not be able to reasonably run it myself, but at least I can choose whom I trust to run it and have inference pricing determined by a competitive market. According to their benchmarks the model is roughly in a class with Claude 4 Sonnet, yet it already costs less than a third of Sonnet's inference pricing.
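For anyone going the hosted route, a minimal sketch of calling Kimi K2 through OpenRouter's OpenAI-compatible endpoint looks roughly like this; the model slug and provider names are assumptions, so check the OpenRouter model page for the exact identifiers:

    # Minimal sketch: Kimi K2 via OpenRouter's OpenAI-compatible API.
    # "moonshotai/kimi-k2" and the provider slugs are assumptions; verify them
    # on the OpenRouter model page before relying on this.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="moonshotai/kimi-k2",  # assumed OpenRouter slug for Kimi K2
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
        # OpenRouter's provider routing can pin which hosts may serve the
        # request; the slugs below are placeholders for the two mentioned above.
        extra_body={"provider": {"order": ["novita", "parasail"]}},
    )
    print(resp.choices[0].message.content)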
I’m actually finding Claude 4 Sonnet’s thinking model to be too slow to meet my needs. It literally takes several minutes per query on Cursor.
So running it locally is the exact opposite of what I’m looking for.
Rather, I'm willing to pay more to have it run on faster-than-normal cloud inference hardware.
Anthropic is already too slow.
Since this model is open source, maybe someone could offer it at a "premium" pay-per-use price where inference runs a lot faster, with more resources thrown at it.
I write a local LLM client, but sometimes I hate that local models have enough knobs to turn that people can argue they're reasonable in any scenario. In yesterday's post on Kimi K2, multiple people spoke up to say you can "just" stream the active expert weights out of 64 GB of RAM, use the lowest GGUF quant, and then you get something that rounds to 1 token/s, and that this counts as reasonable for use.
Good on you for not exaggerating.
I am very curious what exactly they see in that; 2-3 people hopped in to handwave that you just have it do agent stuff overnight and it's well worth it. I can't even begin to imagine, unless you have a metric **-ton of easily solved problems that aren't coding. Even a 90% per-step success rate gets you into "useless" territory quickly when one step depends on another and you're running it autonomously for hours.
I run DeepSeek at 5 tk/sec at home and I'm happy with it. I don't need to do agent stuff to benefit from it. I was saving up to eventually build out enough hardware to run it at 10 tk/sec, but with Kimi K2 the plan has changed: the savings continue, now with the goal of running K2 at 5 tk/sec at home.
> or even a single socket Epyc system with >1TB of RAM
How many tokens/second would this likely achieve?
KTransformers now supports Kimi K2 for MoE offloading.
They claim 14 tps for the 4-bit quant on a single-socket system with 600 GB of RAM and 14 GB of GPU memory.
Around 1 tps by the time you try to do anything useful with it (>10,000 tokens).
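For what it's worth, the 600 GB RAM figure is consistent with a simple size estimate: at roughly 0.58 bytes per parameter for a 4-bit quant (an assumed figure including quantization overhead), a ~1T-parameter model barely fits in that much system RAM, with the 14 GB of GPU memory presumably holding the attention/shared layers and KV cache rather than the experts.

    # Sanity check on the quoted 600 GB: approximate in-RAM size of a
    # ~1T-parameter model at a 4-bit quant. Bytes-per-parameter is an assumption.
    TOTAL_PARAMS = 1.0e12        # Kimi K2's advertised total parameter count
    BYTES_PER_PARAM_Q4 = 0.58    # ~4-bit weights plus scales/zero-points

    model_size_gb = TOTAL_PARAMS * BYTES_PER_PARAM_Q4 / 1e9
    print(f"Approx. 4-bit model size: {model_size_gb:.0f} GB")  # ~580 GB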
This is fairly affordable if you're a business, honestly.
Looks very much usable for local use.
I tried it a couple of times in comparison to Claude. Kimi wrote much simpler and more readable code than Claude's over-engineered solutions. It missed a few minor subtle edge cases that Claude took care of though.
Claude what? Sonnet? 3.7? 3.5? Opus? 4?
The first question I gave it was a fairly simple recreational math problem that I asked it to code up for me, and it got it outrageously wrong. In fairness, and to my surprise, OpenAI's model also failed at this task, although with some prompting it sort of got it.
Still pretty good; someone with enough resources could distill it down to a more manageable size for the rest of us.
I asked it to give me its opinion on an email I'm writing. 95% of its content is quotes from famous authors, and the 5% I wrote is really just minimal glue in between.
All the models I tested, which include Sonnet 4, DeepSeek R1, 4o, and Gemini 2.5, understand that this isn't a normal email and that what I'm asking for is literary/philosophical criticism, not remarks about conventions, formatting, or how to convey my message in a more impactful way.
Yes, this quote is by Baudrillard. None of the other models fixated on the fact that it's an email (I only used the word once in the prompt). My gut feeling is that this reflects not so much a lack of intelligence as a difference in model personality. Here's what it replied when I shared Gemini's analysis with it; the point was to get it to understand that:
The point was deflected as soon as it was understood:
The solution to sycophancy is not disparagement (misplaced criticism). The classic true/false positive/negative dilemma is at play here. I guess the bot got caught in the crossfire of (1) its no-bullshit attitude (it can only be an attitude), (2) a preference for delivering blunt criticism over insincere flattery, and (3) being a helpful assistant. Remove point (3) and it could have replied, "I'm not engaging in this nonsense." Preserve it and it will politely suggest that you condense your bullshit text, because shorter explanations are better than long-winded rants (it's probably in the prompt).