Comment by dnw

19 hours ago

I have been using Copilot, Cursor, and then CC for a little more than a year now. I have written code with teams using these tools, and now I write mostly for myself. My observations have been the following:

1) These tools have obviously improved significantly over the past 12 months. They can churn out code that makes sense in the context of the codebase, meaning their output is grounded more in the codebase they are working on than in the codebases they were trained on.

2) On the surface they are pretty good at solving known problems. You are not going to make them write a well-optimized renderer or an RL algorithm, but they can write run-of-the-mill business logic better _and_ faster than I can--if you optimize for both speed of production and quality.

3) Out of the box, their personality is to just solve the problem in front of them as quickly as possible and move on. This leads them to make suboptimal decisions (e.g. solving a deadlock by sleeping for 2 seconds--CC with Opus 4.5, just last night; see the sketch after this list). This personality can be altered with appropriate guidance. For example, a shortcut I use is to append "idiomatic" to my request--"come up with an idiomatic solution" or "is that the most idiomatic solution we can think of." Similarly, when writing or reviewing tests I use "intent of the function under test," which makes the model output a better solution or better code.

4) These models, esp. Opus 4.5 and GPT 5.2, are remarkable bug hunters. I can point at a symptom and they come away with the bug. I then ask them to explain to me why the bug happens, and I follow the code to see if it's true. I have not come across a bad diagnosis yet. They can find deadlocks and starvation; you then have to guide them to a good fix (see #3).

5) Code quality is not sufficient to create product quality, but it is often necessary to sustain it. The sustainability window is shorter nowadays. Therefore, more than ever, the quality of the code matters. I can see Claude Code slowly degrading in quality every single day--and I use it every single day for many hours. As much as it pains me to say this, compared to Opencode, Amp, and Toad, I can feel the "slop" in Claude Code. I would love to study the codebases of these tools over time to measure their quality--I know it's possible for all but Claude Code.

6) I used to worry that I don't have a good mental model of the software I build. Much like journaling, there is something to be said for how the process of writing/making gives you a very precise mental model. However, I have been trying to let that go and use the model as a tool to query and develop the mental model post facto. It's not the same, but I think it is going to be the new norm. We need tooling in this space.

7) Despite your own experiences with these tools, it is imperative that they be in your toolbox. If you have abstained from them thus far, perhaps the best way to incorporate them is to start by using them to attend to your toil.

8) You can still handcraft code. There is too much fun, beauty, and pleasure in it to deny yourself. Don't expect this to be your job. This is your passion.
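
To make the deadlock example in #3 concrete, here is a minimal sketch in Python (the `Account`/`transfer` names are hypothetical, purely for illustration): the quick "sleep for 2 seconds" fix only narrows the race window, while the idiomatic fix is to acquire locks in one consistent global order.

```python
import threading

class Account:
    def __init__(self, balance: int) -> None:
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src: Account, dst: Account, amount: int) -> None:
    # Quick-and-dirty "fix": sleeping before taking the second lock only
    # makes the deadlock less likely; it does not remove it.
    #
    # Idiomatic fix: always acquire locks in one global order (here, by
    # object id), so no two threads can each hold the lock the other
    # needs next.
    first, second = sorted((src, dst), key=id)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount
```

With this ordering, concurrent `transfer(a, b, ...)` and `transfer(b, a, ...)` calls take the locks in the same order and can no longer deadlock.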

> Despite your own experiences with these tools, it is imperative that they be in your toolbox.

Why is it imperative? Whenever I read comments like this I just think the author is cynically drumming up hype because of the looming AI bubble collapse.

  • Fair question. It is "imperative" for two reasons. First, despite having rough edges now, I find these tools to be actually useful, so they are here to stay. Second, I think most developers will use them and make them part of their toolchain. So, if one wants to stay at parity with their peers, it stands to reason to adopt these tools as well.

    In terms of bubbles: bubbles are an economic concept, and they will burst, but the underlying technology finds its market. There are plenty of good open source models and open source projects like OpenCode/Toad that support them. We can use those without contributing (too much) to the bubble.

  • There's a financial AI bubble for sure - that's pretty much a mainstream opinion nowadays. But that's an entirely different thing from AI itself collapsing.

    If you truly believe AI is simply going to collapse and disappear, you are deep in some serious cope and are going to be unpleasantly surprised.