
Comment by fotcorn

2 years ago

I've been using Cursor quite a bit on my two current personal projects, one in Python and one in C++, and I think it's quite useful:

I haven't written a unit test by hand since I started using Cursor. Because it has the full codebase as context, it can easily generate unit tests. I would say the code is correct around 50% of the time on the first try? If it isn't, you can quickly submit a follow-up prompt to fix things. Obviously, it's still GPT-4 in the background, so hallucinations still happen.
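
To give an idea of what that looks like in practice, the generated tests are plain pytest functions, roughly like this sketch (the helper and test names are made up for illustration, not from my actual project):

```python
# Hypothetical example of the kind of unit test Cursor generates; the small
# helper under test is included so the snippet is self-contained, and is not
# taken from my actual project.
def slugify(title: str) -> str:
    """Toy helper standing in for real project code."""
    return "-".join(title.lower().split())


def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_whitespace():
    assert slugify("  Foo   Bar ") == "foo-bar"


def test_slugify_empty_string():
    assert slugify("") == ""
```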

I tried adding a simple feature to my interpreted custom programming language using the chat feature, giving the full codebase as context: the language already supported indexing into arrays, but not into strings. I asked in chat what I needed to change to make this feature work. It generated code and showed me in which files I had to add new code or modify existing code. Very impressive. Again, the code wasn't perfect and I had to make some minor adjustments, like fixing method names, but it understood my code, pointed me to the right files, and proposed code that was almost correct. There is a beta feature that applies the proposed changes to the codebase directly, but that didn't work when I tried it (I am using an OpenAI API key, so maybe it's related to that).
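
For a rough idea of the kind of change involved, here is a generic sketch of how index evaluation in a tree-walking interpreter might be extended from arrays to strings; the names are illustrative, and this is neither my actual code nor Cursor's exact output:

```python
# Generic sketch: extending index evaluation from arrays to strings in a
# tree-walking interpreter. Class and function names are illustrative only.
class EvalError(Exception):
    pass


def eval_index(container, index):
    """Evaluate `container[index]` for array (list) and string values."""
    if not isinstance(index, int):
        raise EvalError(f"index must be an integer, got {type(index).__name__}")
    if isinstance(container, list):
        # Existing behavior: arrays already supported indexing.
        if index < 0 or index >= len(container):
            raise EvalError(f"array index {index} out of range")
        return container[index]
    if isinstance(container, str):
        # New branch: strings index to a one-character string.
        if index < 0 or index >= len(container):
            raise EvalError(f"string index {index} out of range")
        return container[index]
    raise EvalError(f"type {type(container).__name__} is not indexable")


if __name__ == "__main__":
    print(eval_index([10, 20, 30], 1))  # 20
    print(eval_index("hello", 1))       # e
```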

Overall, I think it is very helpful, especially when I am tired from $dayjob but still want to do some coding in the evening. For me, it's much easier to describe what I want in natural language than to write the code "by hand".

Caveat: even if you use your own OpenAI API key, ALL your code is sent to the cursor.so servers for prompt generation, indexing, etc., even with "local mode" on. "Local mode" just sets a flag so that Cursor won't store your code on their servers, but the code is still sent there for processing. I assume this is a deal-breaker for most companies for now.