Comment by LoganDark
18 hours ago
Every time I try to use an LLM for coding, I completely lose touch with what it's doing; it does everything wrong and can't seem to correct itself no matter how many times I explain. It's so frustrating just trying to get it to do the right thing.
I've resigned myself to mostly using it for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms, where almost nothing is documented except for random WWDC video tutorials with no associated text articles.
I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principles. It's all just slop.
I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.
My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.
I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.
There's clearly a gap between how, or for what, LLM enthusiasts and I would use LLMs. When I've tried them, I've found it just as frustrating as you describe, and it takes away the elements of programming that make it tolerable for me. I don't even think I have especially high standards; I can be pretty lazy about anything outside of work.
I don't view LLMs as a substitute for thinking; I view them as an aid to research and study, and as a translator from pseudocode to syntax. That is, instead of trawling through all the documentation myself and double-checking everything manually, I can have an LLM pop up a solution of some quality, and if it agrees with how my mental model says things should work, I'll accept it or improve on it. And if I know what I want to do but don't know the exact syntax, as has happened recently with Swift as I explore macOS development, an LLM can translate my implementation ideas into something that compiles.
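For a concrete (made-up) sketch of the kind of gap I mean, not something an LLM actually handed me: "tell me whenever the frontmost app changes" is trivial to state as pseudocode, but the exact AppKit names are the part I'd hand off.

    import AppKit

    // "When the frontmost app changes, print its name" -- the intent is obvious;
    // the exact names (NSWorkspace, didActivateApplicationNotification,
    // applicationUserInfoKey) are the part worth asking an LLM to recall.
    let observer = NSWorkspace.shared.notificationCenter.addObserver(
        forName: NSWorkspace.didActivateApplicationNotification,
        object: nil,
        queue: .main
    ) { note in
        if let app = note.userInfo?[NSWorkspace.applicationUserInfoKey] as? NSRunningApplication {
            print(app.localizedName ?? "unknown app")
        }
    }

    // Keep the observer token alive for as long as notifications are wanted.
    withExtendedLifetime(observer) { RunLoop.main.run() }

The point isn't that this snippet is hard; it's that I already know what it should do, so I can check the LLM's output against my mental model instead of trusting it blindly.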
More to the point of the article, though, LLM enthusiasts do seem to view them as a substitute for thinking. They're not augmenting their own application of knowledge with shortcuts and fast paths; they're trusting the LLM entirely to engineer things on its own. LLMs are great at creating the impression that they're suited to this; after all, they're trained on tons of perfectly reasonable engineering, and they show all the same signals a naïve user would use to judge the quality of the engineering... just without the quality.