Comment by mritchie712

1 month ago

This is a great question and easy to answer with the context you provided.

I don't think your poor experience is because of you; it's because of your codebase. In my experience, Cursor works worse on larger codebases and seems particularly good at JS (React, Node, etc.).

Cursor excels at things like small NextJS apps. It will easily work across multiple files and complete tasks that would take me ~30 minutes in 30 seconds.

Trying again in 6 months is a good move. As models get larger context windows and Cursor improves (e.g. better RAG), you should have a better experience.

Cursor's problem isn't bigger context; it's better context.

I've been using it recently with @nanostores/react and @nanostores/router.

It constantly wants to use router methods from react-router rather than nanostores, so I'm forever correcting it.
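To make the mismatch concrete, here's a minimal sketch of the style I'm after (the route names and components are invented, not my actual app):

```
// A minimal sketch, not my real code; the routes here are made up.
import { createRouter, getPagePath, openPage } from '@nanostores/router'
import { useStore } from '@nanostores/react'

export const $router = createRouter({
  home: '/',
  post: '/posts/:postId',
})

// No <Route>, <Link>, or useNavigate(); routing is just a store.
export function App() {
  const page = useStore($router) // re-renders when the URL changes
  if (!page) return <p>Not found</p>
  if (page.route === 'home') {
    return <a href={getPagePath($router, 'post', { postId: '42' })}>Open post 42</a>
  }
  return <button onClick={() => openPage($router, 'home')}>Back from {page.params.postId}</button>
}
```

What I get back instead is the same logic rewritten with react-router's hooks and components.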

This is despite using the Rules for AI config (https://docs.cursor.com/context/rules-for-ai). It keeps making the same mistakes and needing the same corrections, likely because of how dominant react-router is in the model's training data. That tells me the prompt Cursor builds isn't smart enough to say "use @nanostores/router, because react-router isn't anywhere in this codebase".
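For reference, the kind of rule I mean is roughly this shape (paraphrased, not my exact wording):

```
This project uses @nanostores/router and @nanostores/react for routing.
Never import from react-router or react-router-dom.
Navigate with openPage()/getPagePath() on the exported $router store,
not with useNavigate(), <Link>, or <Route>.
```

Even with that in place, the react-router habits keep leaking back in.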

I think for Cursor to really nail it, the base prompt needs more context derived from the codebase. Since I'm importing @nanostores/router, it should notice that and include an instruction to always use @nanostores/router (roughly the behaviour sketched below).
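To sketch what I mean (purely hypothetical behaviour I'd want, not how Cursor actually works internally): read the project's dependencies and turn the ones you recognize into standing instructions before the model ever sees the task.

```
// Hypothetical sketch only; not Cursor's real pipeline.
import { readFileSync } from 'node:fs'

// Known packages mapped to the instruction they should trigger.
const DEPENDENCY_RULES: Record<string, string> = {
  '@nanostores/router':
    'Use @nanostores/router for routing; never suggest react-router.',
  '@nanostores/react':
    'Read stores with useStore from @nanostores/react.',
}

// Derive standing instructions from package.json, to be prepended
// to the base prompt for every request in this workspace.
export function deriveRules(packageJsonPath: string): string[] {
  const pkg = JSON.parse(readFileSync(packageJsonPath, 'utf8'))
  const deps = { ...pkg.dependencies, ...pkg.devDependencies }
  return Object.keys(deps)
    .filter((name) => name in DEPENDENCY_RULES)
    .map((name) => DEPENDENCY_RULES[name])
}
```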

  • I don't think you can fix this with a little bit of prompting on current models, since they seem heavily biased towards react-router (and towards one particular version, iirc from about a year ago).

    It might help if you included API reference/docs for @nanostores/router in the context, but I'm not sure even that would fix it completely.