Ask HN: Why aren't more developers using AI tools?


I’ve worked in both corporate and startup settings and keep noticing that many talented developers I meet don’t use AI tools at all — not even for small things like boilerplate code, tests, or docs.

Why? Concerns about security or IP? Don’t trust the quality? Slows you down instead of helping? Just don’t see the value?

If you don’t use AI tools (or tried and stopped), I’d love to hear your reasons. If you do use them, what convinced you?

My productivity is not significantly limited by my ability to generate code, so I see little value in tools which offer to accelerate the process. (I don't use autocomplete, either; I type quickly, and prefer my editor to stay out of the way as much as possible.) I spend far more time reading, discussing, testing, and thinking than I do writing.

The people who rave about AI tools generally laud their facility with the tedious boilerplate involved in typical web-based business applications, but I have spent years steering my career away from such work, and most of what I do is not the sort of thing you can look up on StackOverflow. Perhaps there are things an AI tool could do to help me, and perhaps someday I will be curious enough to try; but for now they seem to solve problems I don't really have, while introducing difficulties I would find annoying.

Well, for one, my editor doesn’t support plugins (yet)! But I have found that learning my editor well enough removed my need for the pesky edits and boilerplate writing that AI offers. Other than that, I don’t trust it to write anything big, and don’t need it to write anything small.

I’ve tried it a few times and it’s decent for writing unit tests, but otherwise it often made a mess of things. I understand there’s an art to it, but I’m just not interested in putting too much effort into it. I’m always going to go through whatever it generates with a fine-toothed comb, so I don’t see it saving me much time anyhow.

I have watched some very senior engineers really dive in with it though, and seemingly with a lot of success.

I use ChatGPT for small code snippets and to analyze error logs, but for more elaborate stuff I've had pretty bad luck.

Even prior to AI, I've said many times that code generation is evil [1]. I hated Ruby on Rails for this reason: people generate tons of files and then other people are stuck maintaining lots of code that they fundamentally do not understand. To a lesser extent, you also had this with IntelliJ generating large quantities of code as an attempt to make Java a little less irritating.

I once worked for a company that would generate Protobuf parsers and then monkey-patch extra functionality on top of them, and it was an extremely irritating and error-prone process.
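To make that concrete, here's a minimal, self-contained sketch of the pattern (all names are hypothetical; the generated class stands in for real protoc output):

    # Stand-in for a file emitted by a code generator (think protoc output):
    # you're not supposed to edit it, so extra behavior gets bolted on later.
    class User:
        def __init__(self, first_name="", last_name=""):
            self.first_name = first_name
            self.last_name = last_name

    # --- elsewhere, in hand-written code ---

    def display_name(self):
        # Extra functionality the generator knows nothing about.
        return f"{self.first_name} {self.last_name}".strip()

    # The monkey-patch: attach the method to the generated class at import
    # time. Nothing in the generated file hints that this exists, and
    # regenerating the parser can silently break it.
    User.display_name = display_name

    print(User("Ada", "Lovelace").display_name())  # -> Ada Lovelace

The method works, but anyone reading the generated file will never find it, which is exactly why the process was so error-prone.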

The damage used to be limited to very specific code generation tools, but with LLMs, there's effectively no limit to how much code can be generated. This isn't inherently an issue, but like other code generation tools, it runs the risk of creating a lot of shitty code that no one actually understands. It's one thing if it's something low-stakes like a game or a TODO list app, but it's much more concerning in banking and medical applications: if a lazy developer generates a large amount of code that looks more or less correct and seems to more or less work, but doesn't understand it, shit can get serious. I certainly would prefer that the people writing the firmware for an EKG machine actually understand the code they're writing.

[1] At least code that you're expected to edit and maintain. My opinion changes for stuff that's just an optimization detail.

Corporate forbids AI tools due to IP concerns. When I Google for syntax or something I do sometimes look at the AI result though.

On personal projects I usually use AI (Zed Zeta) for tab completion, although sometimes I get annoyed by it interfering with the UI or my cursor and turn it off. I will also occasionally feed a bug or error into Gemini if I'm really stuck - this only works occasionally but is worth a shot.

Every couple of months I try the current hotness (Copilot/Cursor/Gemini Code/etc.) for a small or greenfield project; if I stick with the project for more than a few days I inevitably find the AI can't do anything except the most common possible thing, and turn it off.

I think the disconnect is in my ability to explain to the model in English what I want it to create. If it's something common, I can give it the gist and its assumptions about the rest will be valid, but if it's something difficult, my few paragraphs of instruction and outlining probably just don't provide enough specificity. This is maybe the same problem low-code tools run into: the thing that fully defines what I want to happen is the code itself; any abstraction you build on top of that will be at least as complex, unless I happen to want all the defaults.

On top of that, as others have said: writing all the code yourself is a good way to ensure you know how it all works, and learn about anything new you need to use. This reality reduces my motivation to rely on AI in the first place, because even if it works for now I'm expecting to get hurt by it down the road.

Honestly, I’m surprised to learn there are developers who don’t use AI tools at all in 2025.

I’ve been a software engineer for about 10 years, and these days Claude Code is basically my coding buddy and documentation assistant (and yes, I’m one of those devs who doesn’t love writing the docs and design docs big corporate encourages), and I use ChatGPT daily for research, quick explanations, or brainstorming. At this point it feels weird to work without them.

Though they’re not as magical as I’d want. Working with AI, I have to do the following:

- Review everything it spits out, because it can sound confident but be dead wrong.
- Write clear instructions and rules, which make a huge difference (like claude.local.md; see the sketch below). Yes, writing them can feel like doing documentation...
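For what it’s worth, here’s the rough shape of the rules file I mean. The contents are just an example of my own conventions, not anything official:

    # claude.local.md -- example of my own rules, nothing official

    ## Project context
    - Python 3.12 web service; run `pytest -q` before claiming a fix works.

    ## Rules for Claude
    - Never edit files under migrations/ or anything generated.
    - Prefer small diffs; ask before refactoring across modules.
    - If unsure about an API, say so instead of guessing.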

Even with those quirks, the net gain is huge for me; I just need to learn how to mentor them. AI saves me time on documentation and debugging, gives me new ideas, and helps on side projects.

From my perspective, AI is unstoppable. We’re at the start of a new era in software development, and within 5 years it’s going to replace many roles in our circle and in other industries : )