When it comes to diffs (edits), Cursor is batch-oriented, while CC suggests one edit at a time and can be steered in real time.
That's a critical feature for keeping a human in the loop, preventing big detours and token waste.
How do you stay in the loop with CC? I find myself using it for the exact opposite use case: large features or greenfield projects where I can just let it rip autonomously for a while. I find the TUI awkward for reviewing code.
I stay in the loop by reviewing each edit and, as soon as something looks wrong, telling it to take a different approach.
The opposite approach is also possible: just hit the option (or .json setting) to accept all edits. Then you'd review the persisted changes using your favorite Git tool.
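For the .json route: Claude Code reads project settings from `.claude/settings.json`, and a permissions block along these lines pre-approves file edits so it stops prompting per-edit (rule names here are from memory; double-check against the current Claude Code settings docs):

```json
{
  "permissions": {
    "allow": [
      "Edit",
      "Write",
      "Bash(git diff:*)"
    ]
  }
}
```

After a run, `git diff` and `git add -p` against the working tree become the review step.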
I think if you use Cursor, Claude Code is a huge upgrade. The problem is that Cursor was itself a huge upgrade over a plain IDE, so we are still getting used to that one.
The company I work for builds a similar tool - NonBioS.ai. It is in some ways similar to what the author does above, but packaged as a service. The nonbios agent has a root VM and can write/build all the software you want. You access and control it through a web chat interface; we take care of all the orchestration behind the scenes.
It's also in free beta right now, and signup takes a minute if you want to give it a shot. You can quickly find out for yourself whether the Claude Code/nonbios experience beats Cursor.
I think the path forward there is Slack/Teams/Discord/etc. integration of agents, so you can monitor and control whatever agent software you like via a chat interface, just like you would interact with any other teammate.
We tried that route, but the problem is that these interfaces aren't suited for asynchronous updates. If the agent is working for the next hour or so, how do you communicate that in mediums like these? An agent, unlike a human, is only invoked when you give it a task.
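To make the mismatch concrete: a chat channel only shows something when a message is posted, so a long-running agent has to push its own progress instead of waiting to be asked. A minimal sketch of that pattern (the webhook URL and payload shape are made up for illustration, not any real Slack or NonBioS API):

```python
import json
import time
import urllib.request

# Hypothetical incoming-webhook endpoint for a chat channel.
WEBHOOK_URL = "https://example.com/hooks/agent-status"

def format_progress(task: str, step: int, total: int) -> str:
    """Render one human-readable status line for the channel."""
    pct = 100 * step // total
    return f"[{task}] {pct}% done ({step}/{total} steps)"

def post_update(text: str) -> None:
    """POST a JSON payload to the (hypothetical) chat webhook."""
    body = json.dumps({"text": text}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

def run_agent(task: str, steps: int, interval_s: float = 60.0) -> None:
    """Long-running loop that pushes progress rather than being polled."""
    for step in range(1, steps + 1):
        ...  # one unit of real agent work goes here
        post_update(format_progress(task, step, steps))
        time.sleep(interval_s)
```

Even with this, the channel fills with status spam or goes silent for an hour, which is the awkwardness being described.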
If you use the interface at nonbios.ai, you will quickly see that it is hard to reproduce on Slack/Discord, even though it's still technically 'chat'.
I'm a long-time GitHub Copilot subscriber, but I have also tested many alternatives, such as Cursor.
Recently, I tried using Claude Code with my GitHub Copilot subscription (via unofficial support through https://github.com/ericc-ch/copilot-api), and I found it to be quite good. However, in my opinion, the main difference comes down to your preferred workflow. As someone who works with Neovim, I find that a tool that works in the terminal is more appropriate for me.
Isn’t that usage a violation of ToS? In that repo there’s even an issue thread that mentions this. The way I rely on GitHub these days, losing my account would be far more than annoying.
That’s indeed a valid concern. I set a low rate limit for copilot-api, but I’m not sure if it’s enough.
I may stop my experiment if there is any risk of being banned.
Most of these are Anthropic models under the hood, so I think 'whatever fits your workflow best' is the main deciding factor. That's definitely Claude Code for me, and I do think there's some 'secret sauce' in the exact prompting and looping logic they use, but I haven't tried Cursor a lot to be sure.
Any secret sauce in prompting etc. could be trivially reverse engineered by the companies building the other agents, since they could easily capture all the prompts it sends to the LLM. If there's any edge, it's probably more in them fine-tuning the model itself on Claude Code tasks.
Interesting that the other vendors haven't done this "trivial" task, then, and have pretty much ceded the field to Claude Code. _Every_ CLI interface I've used from another vendor has been markedly inferior to Claude Code, and that includes Codex CLI using GPT-5.
Claude Code seems like the obvious choice for someone using Vim, but even in the context of someone using a graphical IDE like VS Code, I keep hearing that Claude is “better”, and I just can't fathom how that can be the case.
Even if the prompting and looping logic is better, the direct integration with the graphical system along with the integrated terminal is a huge benefit, and with graphical IDEs the setup and learning curve is minimal.
You can run Claude Code in the terminal window of VS Code, and it has IDE integration so you can see diffs inline, etc. It's not fully integrated like Cursor but you get a lot of the benefits of the IDE in that way.
Letting Cursor pick the model for you is inviting them to pick the cheapest model for them, at the cost of your experience. It's better to develop your own sense of what model works in a given situation. Personally, I've had the most success with Claude, Gemini Pro, and o3 in Cursor.
There is no reason to pay for Cursor when Claude is clearly the best coding model; you are essentially just paying for a bigger middleman.
Claude is the best agent model. Its one-shot performance and large-codebase comprehension both lag Gemini/GPT, though.
I feel like, as a tool, Claude Code is superior to "regular code editor with AI plugin". This method of having the AI do your tasks feels like the future.
If Cursor works for you, then stick with it. Claude Code is great for terminal-based workflows. Whatever makes you more productive is the better tool.
I’m just glad we’re getting past the insufferable “use Cursor or get left behind” attitude that was taking off a year ago.
I use Cursor with Claude Code running in the integrated terminal (within a dev container in yolo mode). I'll often have multiple split terminals with different Claude Code instances running on their own worktrees. I keep Cursor around because I love the code completions.
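The worktree setup above is plain `git worktree`: one separate checkout per agent task, each on its own branch, so parallel Claude Code instances never collide on the same files. A sketch (project and branch names are examples; the demo-repo setup is just so the commands run as-is):

```shell
# Demo repo so the commands below run as-is; in practice you'd start
# from your existing project checkout.
cd "$(mktemp -d)"
git init -q myproj && cd myproj
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per agent task, each on its own new branch:
git worktree add ../myproj-auth   -b feature/auth
git worktree add ../myproj-search -b feature/search

# Then run one Claude Code instance per split terminal:
#   cd ../myproj-auth   && claude
#   cd ../myproj-search && claude

git worktree list   # main checkout plus the two task worktrees
```

When a task is done, `git worktree remove ../myproj-auth` cleans up the checkout while keeping the branch.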