Comment by dazhbog
19 days ago
I don't get the hype, and I don't think we will reach peak AI coding performance any time soon.
Yes, watching an LLM spit out lots of code is mesmerizing. Small tasks usually work OK, the code mostly compiles, so for some scenarios it can work out. BUT anyone serious about software development can see what a piece of CRAP the code is.
LLMs are great tools overall: great to bounce ideas off, great to get shit done. If you have a side project and no time, awesome. If your boss/company has a shitty culture and you just want to get the task done, great. Got a mundane coding task, hate coding, or your code won't run in a critical environment? Please, LLM that shit over 9000.
Remember, though: an LLM is just a predictor, a noisy, glorified text predictor. Only when AI stops optimizing for short-term gains, has a built-in long-term memory architecture (similar to humans), AND can produce code at Linux-kernel level of quality and scale can we talk.
Super weird comments on this thread, to the point where I would think it's brigaded, or that some (most) comments are straight-up AI generated. Hacker News has definitely changed. The tone on AI has shifted over the past few months, I guess partly because many people here are working on AI. Almost every new startup is AI-adjacent now; it's no surprise, roughly 120 out of 150 YC startups are AI. So there is a big push on this forum to keep the hype and sentiment going.
The hard part is always the last 20%.
I was thinking the same thing as I scrolled through. Half the comments are some AI-generated homage to AI. I've seen Anthropic pushing so much marketing BS on X/Twitter that it wouldn't surprise me if it extended to HN.
There's just no point arguing anymore, so I stopped. By design, these tools can't work the way they're advertised, but if you point that out you'll be hit with walls of generated "arguments" about why it will be different any day now.
I have junior people on my team using Cursor and Claude, it’s not all great. Several times they’ve checked in code that also makes small yet breaking changes to queries. I have to watch out for random (unused) annotations in Java projects and then explain why the tools are wrong. The Copilot bot we use on GitHub slows down PR reviews by recommending changes that look reasonable yet either don’t really work or negatively impact performance.
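The "small yet breaking changes to queries" failure mode can be sketched with a toy example (the table, data, and queries here are invented for illustration, not taken from the commenter's codebase): a "cleanup" that reads as equivalent but silently changes SQL NULL handling.

```python
import sqlite3

# In-memory table with one row whose status was never set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO tasks VALUES (?, ?)",
                 [(1, "open"), (2, "archived"), (3, None)])

# Original query: keep everything not archived, including unset statuses.
original = conn.execute(
    "SELECT id FROM tasks WHERE status IS NULL OR status != 'archived'"
).fetchall()

# "Simplified" query: looks equivalent, but NULL != 'archived' evaluates
# to NULL in SQL, so the row with an unset status silently disappears.
simplified = conn.execute(
    "SELECT id FROM tasks WHERE status != 'archived'"
).fetchall()

print(original)    # [(1,), (3,)]
print(simplified)  # [(1,)]
```

A diff like this looks perfectly reasonable in a PR, which is exactly why it survives review.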
Overall, I'd say AI tooling has nearly doubled the time I spend on PR reviews. More knowledgeable developers do better with these tools, but they also fall for the tooling's false confidence from time to time.
I worry people are spending less time reading documentation or stepping through code to see how it works out of fear that “other people” are more productive.