Comment by Swizec

6 hours ago

> I'm either in a minority or a silent majority. Claude Code surpasses all my expectations.

I looked at some stats yesterday and was surprised to learn Cursor AI now writes 97% of my code at work, mostly through cloud agents (watching it work is too distracting for me).

My approach is very simple: Just Talk To It

People way overthink this stuff. It works pretty well. Sharing .md files and hyperfocusing on the orchestrations and prompt hacks of the week feels about as interesting as going deep on vim shortcuts and IDE skins.

Just ask for what you want, be clear, give good feedback. That’s it

I agree it works nicely for me. In my experience it's not realistic to expect a one-shot every time. But asking it to build chunks and entering a review cycle with nudging works well. Once I changed my mindset from "it didn't one-shot it, so it's crap" and treated it as an iterative tool that builds pieces I assemble, it's been working nicely without external frameworks or anything. Plan, review, iterate; split, build, review, iterate.

How do you collect these stats?

Is it by characters typed by a human vs. AI-generated, or by commit, or something else?

  • > How do you collect these stats?

    Cursor dashboard. I know they're incentivized to over-estimate but feels directionally accurate when I look at recent PRs.

Are you mostly using the Composer model?

  • > Are you mostly using the Composer model?

    Don't really think about it. I think when I talk to it through Slack, Cursor uses Codex; in my IDE it looks like it's whatever the highest Claude model is. In GitHub comments, who even knows.