Comment by exfalso
4 days ago
Mostly feeling like a caveman. I've been trying and failing to use it productively since the start of the hype. The time wasted could've gone into actual development.
I just don't get it. The productivity delta is literally negative.
I've been asking to work on projects where I thought "oh, maybe this project has a chance of getting an AI productivity boost". Nope. Personal projects have all failed as well.
I don't get it. I guess I'm getting old. "Grandpa, let me write the prompt, you write it like this".
No, you're not alone
I find it wastes my time more than it helps
Everyone insists I must be using it wrong
I was never arrogant enough to think I'm a better coder than most people, but AI code is so bad, and the experience of using it so tedious, that I'm starting to seriously question the skills of anyone who finds themselves more productive using AI for code instead of writing it themselves
Agreed, but I'm also open to the likely possibility that LLMs genuinely work quite well in a few niches that I don't happen to work in, like writing run-of-the-mill React components where open source training data is truly abundant.
In day-to-day work I could only trust it to help me with the most conventional problems that the average developer experiences in the "top N" most popular programming languages and frameworks, but I don't need help with those, because search engines are faster and lead to more trustworthy results.
I turn to LLMs when I have a problem that I can't solve after at least 10 minutes of my own research, which probably means I've strayed off the beaten path a bit. This is where response quality goes down the drain. The LLM succumbs to hallucinations and bad pattern-matching: disregarding important details, suggesting solutions to superficially similar problems, parroting inapplicable conventional wisdom, and summarizing the top five Google search results and calling it "deep research".
> LLMs genuinely work quite well in a few niches that I don't happen to work in, like writing run-of-the-mill React components where open source training data is truly abundant
I write run-of-the-mill React components quite often, and this has not been my experience with AI either, so I really don't know what gives
Agreed.
Only perhaps 1% of the time I've asked an LLM to write code for me has it given me something useful without taking more time than just writing the thing myself.
It has happened, but those instances are vastly outnumbered by the times it spews out garbage I would be professionally embarrassed to ever commit to a repo, and/or the times I end up repeatedly screaming at it, "no, dumbass, I already told you why that isn't a solution to the problem"
> I don't get it. I guess I'm getting old. "Grandpa, let me write the prompt, you write it like this".
It's not that you're getting old (although we all are); it's that you're probably already experienced and can produce better, more relevant code than the mid-to-low-quality code any LLM produces, even with the best prompting.
Just so we're clear: in the only actual study so far measuring the productivity of experienced developers using an LLM, using it led to a 19% decline in productivity. So there's a good chance that you're an experienced dev, and that the ones who do experience a bump in productivity are the less experienced devs.
https://news.ycombinator.com/item?id=44858641
The current LLM hype reminds me of the Scrum/Agile hype, where people could swear it worked for them, and if it didn't work for you, you weren't following some Scrum ritual right. It's the same with LLMs: apparently you're not asking nicely enough, or not writing 4,000 lines of pseudocode and specs to produce one line of well-written code. LLM coding is the new Scrum: useful to an extent and in moderation, but once it becomes a cult, you'd better not engage and just let it die out on its own.
There will be a whole industry of prompting "experts" and prompting books, the same way there were different crops of Scrum, SAFe, and who knows what else. All we can do is sit on the sidelines and laugh.