Comment by mritchie712
1 month ago
I think you're downplaying how well Cursor is doing "code generation" relative to other products.
Cursor can do at least the following "actions":
* code generation
* file creation / deletion
* run terminal commands
* answer questions about a code base
I totally agree with you on ETL (it's a huge part of our product https://www.definite.app/), but the actions an agent takes are just as tricky to get right.
Before I give Cursor a task, I often doubt it's going to be able to pull it off, and I'm constantly impressed by how deep it can go to complete a complex task.
This really puzzles me. I tried Cursor and was completely underwhelmed. The answers it gave (about a 1.5M loc messy Spring codebase) were surface-level and unhelpful to anyone but a Java novice. I get vastly better work out of my intern.
To add insult to injury, the IntelliJ plugin threw spurious errors. I ended up uninstalling it and marking my calendar to try again in 6 months.
Yet some people say Cursor is great. Is it something about my project? I can't imagine how it deals with a codebase that is many millions of tokens. Or is it something about me? I'm asking hard questions because I don't need to ask the easy ones.
What are people who think Cursor is great doing differently?
My tinfoil-hat theory is that Cursor deploys a lot of “guerrilla marketing” with influencers on Twitter/LinkedIn etc. When I tried it, the product was not good (maybe on par with Copilot), but you have people on social media swearing by it. Maybe it just works well for specific types of web development, but I came away thoroughly unimpressed and suspicious that some of the “word of mouth” stuff on them is actually funded by them.
> Maybe it just works well for specific types of web development, but I came away thoroughly unimpressed and suspicious that some of the “word of mouth” stuff on them is actually funded by them.
You can pretty easily disprove this theory by noting the number of extremely well-known, veteran software developers who say that Cursor (or other LLM-based coding assistants) works for them.
It's very doubtful that any one of them could be bought off, and certainly not all of them.
I tell everyone Cursor is awesome because, for my use cases, Cursor is awesome.
I've never thrown 1.5M LoC at it, but for smaller code bases (10k LoC) it is amazing at giving summaries and finding where in the code something is being done.
This is a great question and easy to answer with the context you provided.
I don't think your poor experience is because of you; it's because of your codebase. Cursor works worse (in my experience) on larger codebases and seems particularly good at JS (e.g. React, Node, etc.).
Cursor excels at things like small NextJS apps. It will easily work across multiple files and complete tasks that would take me ~30 minutes in 30 seconds.
Trying again in 6 months is a good move. As models get larger context windows and Cursor improves (e.g. better RAG) you should have a better experience.
Cursor's problem isn't bigger context, it's better context.
I've been using it recently with @nanostores/react and @nanostores/router.
It constantly wants to use router methods from react-router instead of nanostores, so I am constantly correcting it.
This is despite using the rules for AI config (https://docs.cursor.com/context/rules-for-ai). It continually makes the same mistakes and requires the same correction, likely because of the dominance of react-router in the model's training. That tells me the prompt it's using isn't smart enough to know "use @nanostores/router because I didn't find react-router".
I think for Cursor to really nail it, the base prompt needs more context derived from the codebase. Because I'm referencing @nanostores/router, it should know to include an instruction to always use @nanostores/router.
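(For illustration, a minimal sketch of the @nanostores/router pattern in question, as opposed to react-router hooks; the route names and component here are made up.)

    import { createRouter } from '@nanostores/router'
    import { useStore } from '@nanostores/react'

    // Routes live in a nanostores router store, not in react-router.
    export const $router = createRouter({
      home: '/',
      post: '/posts/:postId',
    })

    export function CurrentPage() {
      // Read the current route from the store instead of react-router
      // hooks like useParams()/useLocation(), which Cursor keeps reaching for.
      const page = useStore($router)
      if (!page) return <p>Not found</p>
      if (page.route === 'post') return <p>Post {page.params.postId}</p>
      return <p>Home</p>
    }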
1 reply →
It's for novices and YouTube AI hucksters. It's the coding equivalent of vibrating belts for weight loss.
This is my read as well
So isn’t Cursor just a tool for Claude or ChatGPT to use? Another example would be a flight-booking engine. So why can’t an AI just talk directly to an IDE? This is hard, as the process has changed due to the human needing to be in the middle.
So isn’t AI useless without the tools to manipulate?
I’m very “bullish” on AI in general but find Cursor incredibly underwhelming because there is little value add compared to basically any other AI coding tool that goes beyond autocomplete. Cursor emphatically does not understand large codebases, and smaller codebases (a few files) can just be pasted into a chat context in the worst case.
Is it really that different to Claude with tools via MCP, or my own terminal-based gptme? (https://github.com/ErikBjare/gptme)
I thought it was basically a subset of Aider[0] bolted onto a VS Code fork, and I remain confused as to why we're talking about it so much now when we didn't talk about Aider before. Some kind of startup-friendly bias? I for one would prefer OSS to succeed in this space.
--
[0] - https://aider.chat/
I tried aider and had problems getting it to update code in existing files. Aider uses a search-and-replace pattern to update existing code, so you often end up with something like this:
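(Roughly what aider's SEARCH/REPLACE edit blocks look like; the file name and code here are made up for illustration.)

    src/handlers.ts
    <<<<<<< SEARCH
      if (err) {
        return null
      }
    =======
      if (err) {
        throw err
      }
    >>>>>>> REPLACE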
Of course, aider will try to apply this kind of patch even when the search pattern matches several occurrences in the target file. Looking at the GitHub issues, this problem was brought up several times and never fixed because, apparently, it isn't even considered a problem. I moved to Cursor, which doesn't have this issue, and never looked back.
2 replies →
Somewhat cynically, maybe because there's VC in Cursor.
https://techcrunch.com/2024/12/19/in-just-4-months-ai-coding...
Thanks for spreading the word. I hadn’t heard of Aider before and I’m now going to give it a try today.
Aider is the single best tool I’ve tried. And I’d never heard of it until like 2 weeks ago when someone mentioned it here. I love aider.
4 replies →
This is why I was asking. My own gptme is also just slightly different from Aider and has been around roughly as long.