Comment by sippeangelo
14 hours ago
That's a hefty payday for a model that barely functions! Every time I run out of API credits and get kicked back to Composer 2 I feel like I'm better off just packing up for the rest of the month.
I feel like we're finally at a point where you don't have to constantly argue with and babysit coding models, which makes it even more frustrating when you're suddenly forced to deal with one that ignores your instructions and gets stuck in thinking loops again.
I suspect it's the vast troves of training data rather than any tech that Cursor possesses that SpaceX is after...
Cursor is still the best coding environment and harness. It's actually not even close. They are so good that they actually made Gemini usable.
The problem is they can't compete with Anthropic and OpenAI because they can't sell Opus and GPT at a discount to subscribers like OpenAI and Anthropic do with their subscriptions.
So they either need to build a competing model or slowly die.
I personally disagree on the first point. Claude code in a terminal with vim is much nicer. I just don’t see the need for the bloat of an IDE when the CLI versions work so damn well now.
They have Cursor CLI.
Cursor is essentially all the Claude Code products, but without their horrible bugs.
You can transfer from CLI to web and it actually works.
And Claude can use CLI too. It's the perfect environment for coding agents.
That's why I'm so puzzled as to why Composer doesn't work better when they have the ability to train it from scratch for their own agent harness! Yet it still fails to apply edits, gets confused about why it can't call some commands in its sandbox; the list goes on...
They seemed to be doing fine with Kimi distillation. Not speaking from experience though, I prefer to use my editor.
Bet they'll become tied to Grok pretty soon.
> They are so good that they actually made Gemini usable
I think Gemini is the best model out there, and it's not Cursor you should praise. I use it with JetBrains Junie. Vastly cheaper than Claude, faster, produces better-quality code, actually listens to your instructions, more accurate. I'm sure the Claude Code CLI has some magic I'm missing out on, but having everything just work in a nice IDE (and an LLM that actually understands your symbol table) is like magic.
Are you using Gemini 3.1 Pro? Subscription or paying for the tokens?
I doubt they're buying it for Composer, I imagine they're buying it for the agent harness. It's arguably the best non-Anthropic agentic coding harness, and you get _all the models_ for one subscription price.
Maybe vertical integration is the main business case.
A controlled environment to determine effort and token usage, and to get plenty of exclusive training on code.
It could end up making sense. Idk if they needed to offer 60B though.
I'm not willing to give them the benefit of the doubt. I think this is purely Elon trying to take a pot shot at Anthropic.
JetBrains is crying in the corner...
I've subscribed to the JetBrains all-products pack for years. If agentic coding is going to be the next wave, JetBrains is really behind. Even Microsoft offers better agentic coding with VS Code and the GitHub Copilot CLI.
JetBrains has gone so far downhill.
I honestly can’t believe how poorly JetBrains has done. I used to love PyCharm but now it’s so far behind. I still use DataGrip but it is absolute dogshit when it comes to agentic coding.
Cursor is great. I was using it up until recently. Then I switched to oh my pi, and honestly I haven't looked back. I've also heard great things about opencode.
I actually really like Composer 2. For my use case, between the planning tool, and getting it to ask a lot of clarifying questions, I regularly get very good results. I'm not doing anything complex though; mostly staying in the lane of very common web app type code.
It definitely feels sufficient for questions and planning, but it is surprisingly lacking in the actual coding department once you go for edits that span multiple files. Which is surprising, considering they should have been able to train it on their own harness!
Composer 2 is really good for me too.
They still just bought access to all the code you've ever fed into the model...
Cursor, very reasonably, had a "no retention" checkbox available to everyone, including those on free plans.
I'm sure those work as well as the "don't collect my data" checkboxes too.
Is Composer 2 a bad model because Cursor are bad at training models, or because they are compute constrained? This deal will provide the answer to that question.