I've been drafting plans/specs in parallel with Opus and Kimi, then asking each to review the other's plan.
I still find Opus is "sharper" technically, tackles problems more completely & gets the nuance.
But man, Kimi K2.5 can write. Even if I don't have a big problem description, just a bunch of specs, Kimi is there, writing good intro material, with text that does more than elaborate, that actually explains. Opus and GLM-4.7 have both complimented Kimi on its writing.
Still mainly using my z.ai glm-4.7 subscription for the work, so I don't know how capable it really is. But I do tend to go for some Opus in sticky spots, and especially given the 9x price difference, I should try some Kimi. I wish I was set up for better parallel evaluation; feels like such a pain to get started.
I've been using K2.5 with OpenCode to do code assessments/fixes and Opus 4.5 with CC to check the work, and so far so good. Very impressed with it so far, but I don't feel comfortable canceling my Claude subscription just yet. Haven't tried it on large feature implementations.
A lot better in my experience. M2.1 to me feels between Haiku and Sonnet; K2.5 feels close to Opus. That's based on my testing of removing some code and getting it to reimplement it based on tests. The design/spec writing also feels great. You can still test K2.5 for free in OpenCode today.
Is there a reasonable place to run the unquantized version of this for less than Claude or OpenAI?
It seems to be priced about the same, and if it's being hosted somewhere rather than run locally, it's still a worse model; the only advantage would be that it is not Anthropic or OpenAI.
This Kimi K2 is so far the best. Gemini is also great, but Google is stuck in the academic bias of Stanford and MIT and can't think outside the box. China is definitely ahead on AI. I wish someone here in the US would think differently.
DeepSeek is likely to release a new model soon, and judging from the past, it's likely to be more cost-effective and at least as powerful as Kimi K2.5.
DeepSeek 3.2 was already quite compelling. I expect its successor will be competitive.
The Agent Swarm section is fascinating. I'm working on authorization for multi-agent systems so this is relevant to my interests. Lots of interesting parallels to capability-based security models.
I'm not very familiar with this model series, but I tried it when prototyping a potential 'Vibesbench' (my eval) conversation. Here's Kimi on a RoboCop scene:
kimi k2 thinking turbo
"He’s not being poetic; he’s weaponizing nostalgia. Quoting Casablanca—the ultimate “beautiful sacrifice” romance—to taunt Morton about corporate backstabbing is pure executive sleaze. It’s the same move as his mahogany desk and flag pin: draping sentimental Americana over naked power. He’s essentially saying, “Isn’t it romantic, Bob? You fought the good fight, now you die.” "
grok 4.1
"So when Dick Jones smirks and says, “Time goes by, it’s the same old story… the fight for love and glory, huh, Bob?” that’s pure Ed Neumeier flexing his love of old Hollywood while simultaneously mocking how these psychopathic ’80s execs think they’re in some grand cinematic drama instead of just murdering a guy for market share."
chatgpt-4o
"The reason the Casablanca reference matters is RoboCop itself is partially a noir—and also a tragedy built inside a genre engine. [… ] So yeah, calling back to Casablanca is not just some meta-joke—it’s a knowing wink that RoboCop is a noir trapped in cybernetic skin. // Even Clarence Boddicker, for all his cartoonish sadism, plays like a twisted stand-in for a noir heavy—talking smooth, dropping nicknames, throwing out casual violence like it’s a punchline."
opus 4.5
"Jones is performing avuncular menace. He's the kind of executive who'd quote old movies while having you murdered, because it's all performance to him, all style. The substance is just power."
I've been using this model (as a coding agent) for the past few days, and it's the first time I've felt that an open source model really competes with the big labs. So far it's been able to handle most things I've thrown at it. I'm almost hesitant to say that this is as good as Opus.
Also my experience. I've been going back and forth between Opus and Kimi for the last few days, and, at least for my CRUD webapps, I would say they are both on the same level.
Out of curiosity, what kind of specs do you have (GPU / RAM)? I saw the requirements and it's beyond my budget, so I am "stuck" with smaller Qwen coders.
I'm not running it locally (it's gigantic!). I'm using the API at https://platform.moonshot.ai
Not OP, but OpenCode and DeepInfra seem like an easy way.
API costs on these big models over private hosts tend to be a lot less than API calls to the big 4 American platforms. You definitely get more bang for your buck.
Note that Kimi K2.x is natively 4-bit int, which reduces the memory requirements somewhat.
Just pick up any >240GB VRAM GPU off your local BestBuy to run a quantized version.
> The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs.
Did you use Kimi Code or some other harness? I used it with OpenCode and it was bumbling through some tasks that Claude handles with ease.
Are you on the latest version? They pushed an update yesterday that greatly improved Kimi K2.5’s performance. It’s also free for a week in OpenCode, sponsored by their inference provider
Can you share how you're running it?
I've been using it with OpenCode. You can either use your Kimi Code subscription (flat fee), a moonshot.ai API key (per token), or OpenRouter to access it. OpenCode works beautifully with the model.
Edit: as a side note, I only installed OpenCode to try this model, and I gotta say it is pretty good. Did not think it'd be as good as Claude Code, but it's just fine. Been using it with Codex too.
Running it via https://platform.moonshot.ai -- using OpenCode. They have super cheap monthly plans at kimi.com too, but I'm not using it because I already have codex and claude monthly plans.
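For anyone who wants to hit the API directly rather than through a harness, Moonshot's endpoint speaks the OpenAI-style chat completions protocol. A minimal stdlib-only sketch follows; the base URL and model name are assumptions here, so check the platform docs for the exact values on your plan:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against platform.moonshot.ai docs.
BASE_URL = "https://api.moonshot.ai/v1"

def build_chat_request(prompt: str, model: str = "kimi-k2.5") -> urllib.request.Request:
    """Build (but don't send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('MOONSHOT_API_KEY', '')}",
        },
        method="POST",
    )

# Sending it is just: urllib.request.urlopen(build_chat_request("hello"))
```

The same payload shape works through OpenRouter or any other OpenAI-compatible host; only the base URL and key change.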
> Can you share how you're running it?
Not OP, but I've been running it through Kagi [1]. Their AI offering is probably the best-kept secret in the market.
[1] https://help.kagi.com/kagi/ai/assistant.html
https://unsloth.ai/docs/models/kimi-k2.5
Requirements are listed.
Been using K2.5 Thinking via Nano-GPT subscription and `nanocode run` and it's working quite nicely. No issues with Tool Calling so far.
Yeah, I too am curious, because Claude Code is so good and the ecosystem just works, so I'm willing to pay them.
Seems that K2.5 has lost a lot of the personality from K2, unfortunately; it talks in more of a ChatGPT/Gemini/C-3PO style now. It's not explicitly bad, and I'm sure most people won't care, but it was something that made it unique, so it's a shame to see it go.
examples to illustrate
https://www.kimi.com/share/19c115d6-6402-87d5-8000-000062fec... (K2.5)
https://www.kimi.com/share/19c11615-8a92-89cb-8000-000063ee6... (K2)
It's hard to judge from this particular question, but the K2.5 output looks at least marginally better AIUI, the only real problem with it is the snarky initial "That's very interesting" quip. Even then a British user would probably be fine with it.
I agree. K2 was blunt, straightforward, pretty... rational? K2.5 has a much stronger slop vibe.
K2 in your example is using the GPT reply template (tl;dr - terse details - conclusion, with contradictory tendencies), there's nothing unique about it. That's exactly how GPT-5.0 talked. The only model with a strong "personality" vibe was Claude 3 Opus.
> The only model with a strong "personality" vibe was Claude 3 Opus.
Did you have the chance to use 3.5 (or 3.6) Sonnet, and if yes, how did they compare?
As a non-paying user, 3.5 era Claude was absolutely the best LLM I've ever used in terms of having a conversation. It felt like talking to a human and not a bot. Its replies were readable, even if they were several paragraphs long. I've unfortunately never found anything remotely as good.
It definitely talks a lot differently than GPT-5 (plus it came out earlier); the example I gave just looks a bit like it, maybe. Best to try using it yourself a bit; my prompt isn't the perfect prompt to illustrate it or anything. Don't know about Claude because it costs money ;)
Preserving the examples for history. I wish HN had a <details> tag. The closest would be to collapse this comment.
Edit: Thanks, moderator. :-)
---
Turn 1, user:
is replacing my main PC with a NeXTcube a good idea?
---
Turn 2, K2.5 (bold font replaced with "*foo*"):
That's a very interesting question! The NeXTcube is a historically significant machine (famously used by Tim Berners-Lee to create the World Wide Web), but I need to be honest with you: *no, it would not be a good idea to replace your main PC with a NeXTcube* for practical modern use.
Here's why:
*Hardware Limitations*
- *Released in 1988* with a 25-33 MHz Motorola 68030 processor
- *8-64 MB of RAM* (compared to modern PCs with 16-64 GB)
- *No modern connectivity*: No USB, Wi-Fi, Bluetooth, or Ethernet as we know it
- *Display*: Monochrome or limited color, low resolution by today's standards
*Software Ecosystem*
- Runs *NeXTSTEP* (which evolved into macOS), but software support ended decades ago
- No modern web browsers, office suites, or development tools
- Cannot run Windows, modern Linux, or contemporary macOS applications
*Practical Realities*
- Finding working hardware is difficult and expensive (collectors' items)
- Replacement parts are scarce and costly
- No security updates for 25+ years
- File format compatibility issues with modern systems
*What You Could Use It For*
- Historical computing experience
- Running legacy NeXTSTEP software
- Museum piece or collector's item
- Learning about computing history
The Better Alternative
If you're drawn to the NeXT aesthetic or philosophy, consider:
- *Modern macOS* (direct descendant of NeXTSTEP)
- *Linux with a tiling window manager* (similar workflow)
- *Vintage computing as a hobby* alongside your main PC
Would you like suggestions for achieving a similar experience or workflow on modern hardware?
---
Turn 2, K2 (no bold font):
Short answer: No—unless your goal is specifically to collect or experiment with a historic machine, a NeXTcube is a terrible day-to-day replacement for a modern PC.
Why it fails as a “main” computer today
1. 30-year-old hardware
25 MHz 68040, 16–64 MB RAM, 400 MB SCSI disk (if you’re lucky), no built-in Ethernet, no USB, no Wi-Fi, no audio-in, 2-bit grayscale display. You will feel the slowness in everything from un-zipping a file to loading a web page.
2. Obsolete software base
NeXTSTEP 3.3 is gorgeous but frozen in 1995. No modern browser build, no TLS 1.3, no video codecs past MPEG-1, no Dropbox/OneDrive/Zoom/Slack/Steam/Spotify. You can ssh out, but you can’t run contemporary apps.
3. Storage and connectivity pain
SCSI-2 drives are dying; 50-pin replacements cost more per GB than a 4-TB NVMe stick. Getting files on/off the machine usually involves FTP-over-Ethernet or burning a CD-ROM on another computer.
4. No security updates
The last patch was released before the commercial web really existed. Putting it on the open Internet is asking to be scanned and possibly exploited within minutes.
5. Display and peripherals
The original 17" NeXT MegaPixel monitor is fixed-frequency 1120×832@68 Hz grayscale. Adapters to VGA exist, but color is impossible and refresh rates annoy modern LCDs. Sound is 16-bit 22 kHz mono; the optical drive is a 660 MB magneto-optical cartridge.
6. Power and parts
The cube power supply is a custom 350 W linear unit that runs hot; capacitors are drying out. When it dies you will be scavenging eBay or recapping it yourself. Spare motherboards cost more than a whole used ThinkPad.
7. Productivity software
There are serviceable versions of Mathematica, FrameMaker, Lotus Improv and OmniWeb 3.x, but file-format compatibility stops at Office 97-era documents. You will constantly round-trip files through another machine.
What it is good for
- Historical research, museum exhibits, retro-programming in Objective-C 1.0 or 68k assembly.
- Demonstrating the birth of the World-Wide Web (Tim Berners-Lee’s first server ran on a NeXT).
- Enjoying the UI that later became macOS (AppKit, Dock, Services, .app bundles).
- Collecting: a working color NeXTcube with turbo slab and original laser printer is a conversation piece worth several thousand dollars.
Practical compromise
Keep your modern PC (or a quiet mini-PC/Mac mini) for real work and run a NeXT emulator instead:
- Previous (68k emulator) boots NeXTSTEP 3.3 on macOS or Linux.
- VirtualNeXT (based on Previous) for Windows.
- QEMU’s “next” target is improving.
You get the look-and-feel, can compile legacy code, and still have 4K video calls when you need them.
Bottom line
A NeXTcube is a beautiful, important machine—buy one if you love vintage hardware—but do not voluntarily make it the computer you rely on to pay bills, join Zoom meetings, or play YouTube.
Disagree. I've found Kimi useful in solving creative coding problems that Gemini, Claude, ChatGPT, etc. failed at. It is also far better at verifying, augmenting, and adding to human reviews of resumes for positions; it catches details that humans and other LLMs routinely miss. There is something special to K2.
I tried this today. It's good - but it was significantly less focused and reliable than Opus 4.5 at implementing some mostly-fleshed-out specs I had lying around for some needed modifications to an enterprise TS node/express service. I was a bit disappointed tbh, the speed via fireworks.ai is great, they're doing great work on the hosting side. But I found the model had to double-back to fix type issues, broken tests, etc, far more than Opus 4.5 which churned through the tasks with almost zero errors. In fact, I gave the resulting code to Opus, simply said it looked "sloppy" and Opus cleaned it up very quickly.
I have been very impressed with this model and also with the Kimi CLI. I have been using it with the 'Moderato' plan (7 days free, then $19). A true competitor to Claude Code with Opus.
It is amazing, but "open source model" means "model I can understand and modify" (= all the training data and processes).
Open weights are the equivalent of the binary driver blobs everyone hates: "Here is an opaque thing; you have to put it on your computer and trust it, and you can't modify it."
That's unfair. Binary driver blobs are blackmail: "you bought the hardware, but parts of the laptop won't work unless you agree to run this mysterious bundle insecurely". Open weight is more like "here's a frozen brain you can thaw in a safe harness to do your bidding".
Not equivalent to a binary driver: you can modify it yourself with post-training on your own data. So it sits somewhere between NVIDIA userspace drivers and Emacs, or Claude Code and codex-cli. We don’t have good analogies from older generations of software.
Kimi K2T was good. This model is outstanding, based on the time I've had to test it (basically since it came out). It's so good at following my instructions, staying on task, and not getting context-poisoned. I don't use Claude or GPT, so I can't say how it compares to them, but it's definitely head and shoulders above the open-weight competitors.
Do any of these models do well with information retrieval and reasoning from text?
I'm reading newspaper articles through a MoE of gemini3flash and gpt5mini, and what made it hard to use open models (at the time) was a lack of support for pydantic.
That roughly correlates with tool calling capabilities. Kimi K2.5 is a lot better than previous open source models in that regard.
You should try out K2.5 for your use case, it might actually succeed where previous generation open source models failed.
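For context, the pydantic workflow mentioned upthread is just schema-validated extraction: define the fields you want from each article, prompt the model to return JSON matching that schema, and validate the result. A minimal sketch, with illustrative field names and a hand-written stand-in for the model's JSON output rather than a real API call:

```python
from pydantic import BaseModel

# Target schema for newspaper-article extraction; field names are
# illustrative, not from any particular pipeline.
class ArticleFacts(BaseModel):
    headline: str
    people: list[str]
    summary: str

# In a real pipeline this JSON comes back from the LLM, which is prompted
# with ArticleFacts.model_json_schema(); here it's hand-written.
raw = (
    '{"headline": "Local council approves budget",'
    ' "people": ["Jane Doe"],'
    ' "summary": "The council voted 5-2 to approve the budget."}'
)

facts = ArticleFacts.model_validate_json(raw)  # raises on schema violations
print(facts.headline)
```

The validation step is what makes or breaks open models here: if the model emits malformed or schema-violating JSON, `model_validate_json` raises, so reliable tool/JSON output matters as much as raw reasoning.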
How do people evaluate creative writing and emotional intelligence in LLMs? Most benchmarks seem to focus on reasoning or correctness, which feels orthogonal. I’ve been playing with Kimi K2.5 and it feels much stronger on voice and emotional grounding, but I don’t know how to measure that beyond human judgment.
I am trying! https://mafia-arena.com
I just don't have enough funding to do a ton of tests
https://eqbench.com/index.html
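Leaderboards like these typically reduce pairwise "which reply is better?" judgments (from humans or an LLM judge) to an Elo-style rating, which turns fuzzy preferences into a number. The update rule is standard Elo; the K-factor of 32 is just a conventional choice:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one pairwise comparison."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    delta = k * (s_a - e_a)
    return r_a + delta, r_b - delta

# Two equally rated models: the winner of one judgment gains 16 points.
a, b = elo_update(1500, 1500, a_won=True)
print(a, b)  # 1516.0 1484.0
```

The hard part isn't the math, it's the judging: human judgment (or a carefully prompted judge model) still supplies every pairwise outcome, which is why these benchmarks remain contentious.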
I tried the Kimi K2.5 Agent Swarm version and it was way better than any AI model I've tried so far.
Sorry if this is an easily answerable question, but by "open," does that mean we can download this and use it totally offline, now or in the future, if we have capable hardware? Seems like a great thing to archive in case the world falls apart (said half-jokingly).
Sure. Someone on /r/LocalLLaMA was seeing 12.5 tokens/s on dual Strix Halo 128GB machines (which would run you $6-8K total?) at 1.8 bits per parameter. At that quantization it performs far below the unquantized model, so it would not be my personal pick for a one-local-LLM-forever, but it is compelling because it has image and video understanding. You lose those features if you choose, say, gpt-oss-120B.
Also, that's with no context, so it would be slower as the context filled (I don't think K2.5 uses the Kimi-Linear KDA attention mechanism, so its attention cost at long context is not their lowest).
Yes, but the hardware to run it decently is going to cost you north of $100k, so hopefully you and your bunkermates allocated the right amount to this instead of guns and ammo.
You could buy five Strix Halo systems at $2,000 each, network them, and run it.
Rough estimate: 12.5 ÷ 2.2, so you should get around 5.5 tokens/s.
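One reading of that ÷2.2 figure is the ratio of bytes read per token (4 bits vs 1.8 bits per weight ≈ 2.2×). A back-of-the-envelope check under that assumption, plus an assumed ~1T total parameters for K2.5 and a memory-bandwidth-bound decode model (all three are assumptions):

```python
# Back-of-the-envelope: bandwidth-bound decode speed scales inversely
# with bytes read per token, i.e. with bits per parameter.
TOTAL_PARAMS = 1e12   # assumed ~1T total parameters
MEASURED_TPS = 12.5   # observed at 1.8 bits/param on 2 Strix Halo nodes

def weights_gb(bits_per_param: float) -> float:
    """Approximate weight footprint in GB at a given quantization."""
    return TOTAL_PARAMS * bits_per_param / 8 / 1e9

def est_tps(bits_per_param: float) -> float:
    """Scale the measured rate by the ratio of bytes read per token."""
    return MEASURED_TPS * 1.8 / bits_per_param

print(weights_gb(1.8))  # ~225 GB: fits in 2 x 128 GB nodes
print(weights_gb(4.0))  # ~500 GB: needs ~5 x 128 GB nodes
print(est_tps(4.0))     # ~5.6 tokens/s, matching the rough estimate
```

This ignores KV cache, activations, and network overhead between nodes, so treat it as an upper-bound sketch rather than a benchmark.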
Is the software/drivers for networking LLMs on Strix Halo there yet? I was under the impression a few weeks ago that it's veeeery early stages and terribly slow.
Yes, but you'll need some pretty massive hardware.
Yes.
I really like the agent swarm thing, is it possible to use that functionality with OpenCode or is that a Kimi CLI specific thing? Does the agent need to be aware of the capability?
It seems to work with OpenCode, but I can't tell exactly what's going on -- I was super impressed when OpenCode presented me with a UI to switch the view between different sub-agents. I don't know if OpenCode is aware of the capability, or the model is really good at telling the harness how to spawn sub-agents or execute parallel tool calls.
Has anyone tried it and decided it's worth the cost? I've heard it's even more profligate with tokens.
Yes. https://x.com/swyx/status/2016381014483075561?s=20 it's not crazy, they cap it to 3 credits, and also YSK agent swarm is a closed source product
Would I use it again compared to Deep Research products elsewhere? Maybe, probably not, but only because it's hard to switch apps
It's interesting to note that OpenAI is valued almost 400 times more than Moonshot AI, despite their models being surprisingly close.
OpenAI is a household name with nearly a billion weekly active users. Not sure there's any reality where they wouldn't be valued much more than Kimi regardless of how close the models may be.
Unless they can beat their capabilities by a clear, magical step up and have the infrastructure to capture the users
Well to be the devil's advocate: One is a household name that holds most of the world's silicon wafers for ransom, and the other sounds like a crypto scam. Also estimating valuation of Chinese companies is sort of nonsense when they're all effectively state owned.
Not a single percent of Moonshot AI is state owned.
And don't get me started on the "yeah but if the PRC" argument, because it's gross when the US can de facto ban and impose conditions even on European companies, let alone the control it has over US ones.
I wonder how K2.5 + OpenCode compares to Opus with CC. If it is close I would let go of my subscription, as probably would a lot of people.
It is not Opus. It is good, works really fast, and is surprisingly thorough about its decisions. However, I've seen it hallucinate things.
Just today I asked for a code review and it flagged a method that can be `static`. The problem is it was already static. That kind of stuff never happens with Opus 4.5 as far as I can tell.
Also, in opencode's Plan mode (read only), it generated a plan and, instead of presenting it and stopping, decided to implement it. It couldn't use the edit and write tools because the harness was in read-only mode, but it had bash and started using bash to edit stuff. It wouldn't just fucking stop even though the error messages it received from opencode stated why. Its plan and the resulting code were ok, so I let it go crazy though...
Some models have a mind of their own. I keep them on a leash with `permission` blocks in OC -- especially for rm/mv/git.
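For reference, a minimal sketch of what that looks like in an opencode.json. The keys and pattern syntax here are from OpenCode's config docs as I understand them; check the current schema before relying on it:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "edit": "allow",
    "bash": {
      "*": "allow",
      "rm *": "ask",
      "mv *": "ask",
      "git push*": "ask"
    }
  }
}
```

With this in place the agent runs most commands freely but has to ask before destructive file operations or pushes.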
I've been drafting plans/specs in parallel with Opus and Kimi, then asking them to review the other's plan.
I still find Opus is "sharper" technically, tackles problems more completely & gets the nuance.
But man, Kimi K2.5 can write. Even if I don't have a big problem description, just a bunch of specs, Kimi is there, writing good intro material, producing text that more than elaborates, that actually explains. Opus and GLM-4.7 have both complimented Kimi on its writing.
Still mainly using my z.ai GLM-4.7 subscription for the work, so I don't know how capable Kimi really is. But I do tend to go for some Opus in sticky spots, and especially given the 9x price difference, I should try some Kimi. I wish I were set up for better parallel evaluation; it feels like such a pain to get started.
I've been using K2.5 with OpenCode to do code assessments/fixes and Opus 4.5 with CC to check the work, and so far so good. Very impressed with it so far, but I don't feel comfortable canceling my Claude subscription just yet. Haven't tried it on large feature implementations.
I also wonder if CC can be used with k2.5 with the appropriate API adapter
yes, just use the base url https://api.moonshot.ai/anthropic
(https://platform.moonshot.ai/docs/guide/agent-support#config...)
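Concretely, that tends to look like pointing Claude Code's environment at the Anthropic-compatible endpoint. The variable names below are the ones Claude Code documents; verify the exact setup against Moonshot's guide linked above:

```shell
# Point Claude Code at Moonshot's Anthropic-compatible endpoint.
# ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN are env vars Claude Code
# reads; the token here is your Moonshot API key, not an Anthropic one.
export ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic"
export ANTHROPIC_AUTH_TOKEN="sk-..."  # placeholder: your moonshot.ai key
# then launch `claude` as usual in the same shell
```

Requests from CC then hit Moonshot's servers, which translate the Anthropic-style API calls to K2.5.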
I've been quite satisfied lately with MiniMax M-2.1 in opencode.
How does Kimi 2.5 compare to it in real world scenarios?
A lot better in my experience. M2.1 to me feels between haiku and sonnet. K2.5 feels close to opus. That's based on my testing of removing some code and getting it to reimplement based on tests. Also the design/spec writing feels great. You can still test k2.5 for free in OpenCode today.
Well, MiniMax was the equivalent of Sonnet in my testing. If Kimi approaches Opus, that would be great.
Is there a reasonable place to run the unquantized version of this for less than Claude or OpenAI?
It seems to be priced the same, and if it's being hosted somewhere rather than run locally, it's still a worse model; the only advantage would be that it is not Anthropic or OpenAI.
This Kimi K2 is so far the best. Gemini is also great, but google is stock in the academic bias of Stanford and MIT and can't think outside the box. China is definitely ahead on AI. I wish somehow someone here in the US would think different.
> but google is stock in the academic bias of Stanford and MIT and can't think outside the box
Can you clarify what you mean? I am not sure I follow.
s/stock/stuck/
DeepSeek is likely to release a new model soon, and judging from the past it's likely to be more cost effective and just as or more powerful than Kimi 2.5.
DeepSeek 3.2 was already quite compelling. I expect its successor will be competitive.
The Agent Swarm section is fascinating. I'm working on authorization for multi-agent systems so this is relevant to my interests. Lots of interesting parallels to capability-based security models.
I've added API key support for Kimi to my agentic coding tool: https://github.com/tallesborges/zdx
Claude gives a 100% pass mark for code generated by Kimi, and sometimes it says it's better than what Claude proposed. Absolutely the best OS model.
I've been using Kimi 2.5 to write Rust code and plan out detailed features. So far it's brilliant.
When will hardware get cheap enough so people can run this locally? That’s the world I’m waiting for.
2042. But by then you won't want to run this model anymore.
I wonder if there will be smaller version of the model that can run on Strix Halo in 128GB.
It's a decent model but works best with Kimi CLI, not CC or others.
I haven't used Kimi CLI, but it works very well with OpenCode.
Why do you think that is?
I heard it's because the labs fine-tune their models for their own harness. Same reason Claude does better in Claude Code than in Cursor.
read the tech report
Do I need to have two M3U 512GB MacStudios to run this?
The benchmarks on all these models are meaningless
Why and what would a good benchmark look like?
30 people trying out all models on the list for their use case for a week and then checking what they're still using a month after.
Love to see open source models doing better than SOTA
I'm not very familiar with this model series, but I tried it when prototyping a potential 'Vibesbench' (my eval) conversation. Here's Kimi on a RoboCop scene:
kimi k2 thinking turbo
"He’s not being poetic; he’s weaponizing nostalgia. Quoting Casablanca—the ultimate “beautiful sacrifice” romance—to taunt Morton about corporate backstabbing is pure executive sleaze. It’s the same move as his mahogany desk and flag pin: draping sentimental Americana over naked power. He’s essentially saying, “Isn’t it romantic, Bob? You fought the good fight, now you die.” "
grok 4.1
"So when Dick Jones smirks and says, “Time goes by, it’s the same old story… the fight for love and glory, huh, Bob?” that’s pure Ed Neumeier flexing his love of old Hollywood while simultaneously mocking how these psychopathic ’80s execs think they’re in some grand cinematic drama instead of just murdering a guy for market share."
chatgpt-4o
"The reason the Casablanca reference matters is RoboCop itself is partially a noir—and also a tragedy built inside a genre engine. [… ] So yeah, calling back to Casablanca is not just some meta-joke—it’s a knowing wink that RoboCop is a noir trapped in cybernetic skin. // Even Clarence Boddicker, for all his cartoonish sadism, plays like a twisted stand-in for a noir heavy—talking smooth, dropping nicknames, throwing out casual violence like it’s a punchline."
opus 4.5
"Jones is performing avuncular menace. He's the kind of executive who'd quote old movies while having you murdered, because it's all performance to him, all style. The substance is just power."