Comment by ruszki
9 days ago
I want to see an example of this. Any real example. So far, from what I’ve seen, I wouldn’t put my name to any of that barely usable code, and most of the time people were even slower, especially when they pretended to have reviewed their code. And even with reviews they happily accepted bad code. This was true even with my friends, not just random examples on the internet.
I still need to understand every single line of the code to be responsible for it, and that takes the majority of the time anyway. Quite often I need to rewrite most of it, because average code is not particularly good: most code wasn’t produced by senior professionals, but by random people writing a random Python script with only Hello World under their belt. So in the end it doesn’t really matter whether I copy-paste from a source or an LLM does the same.
I understand that many coders are happy with the “John made his first script in his life” level of code, but I’m paid well because I can do better, way better. Especially because I need to be responsible for my code, because the companies I work for are forced to be responsible.
But of course, when there is no responsibility, I don’t care either: home projects where there is exactly zero risk. Even big names seem to use these only for those kinds of projects, where they don’t really care.
I'll start by saying that in my past life I was a PM, and from that angle I can very much see how people writing code for large-scale, production systems take real issue with the quality of what LLMs produce.
But these days I run a one-man business, and LLMs (currently Claude Code, previously GPT) have written me a ton of great code. To be clear, when I say "great" I don't mean up to your standards of code quality; rather, I mean that it does what I need it to do and saves me a bunch of time.
I've got a great internal dashboard that pulls in data from a few places, and right now CC is adding some functionality to a script that does my end-of-month financial spreadsheet update. I have a script that filters inbound leads (I buy e-commerce brands, generally from marketplaces that send me an inordinate number of emails that I previously had to wade through myself in order to find the rare gem). On the non-code front, I have a very long prompt that basically does the first pass of analysis of prospective acquisitions, and I use Shortcut.ai to clean up some of the P&Ls I get (I buy small e-commerce brands, so the finances are frequently bad).
So while I can't speak to using LLMs to write code if you're working in any sort of real SaaS business, I can definitely say that there's real, valid code to be had from these things for other uses.
One of my friends did a job for a government. He generated the code for it with some LLM. It produced a result that was roughly what he thought it should be. Neither he nor anybody else ever checked whether the code really calculated what it was supposed to. “It did what [he] needed it to do.” Now that government has started making decisions based on a result that nobody has verified. In other words, a lottery.
What you mentioned doesn’t mean anything until there is hard proof that it really works. I understand that it seems to you that it works, but I’ve seen enough to know that that means absolutely nothing.
Thanks, I can relate to the parent poster, and this is a really profound comment for me. I appreciate the way you framed this. I’ve felt compelled to fact-check my own LLM outputs, but I can’t possibly keep up with the quantity. And it’s tempting (but seems irrational) to hand the results to a different LLM. My struggle is remembering that there needs to be input/query/calculation/logic validation (without getting distracted by all the other shiny new tokens in the result).
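The only thing that has helped me so far is spot-checking the LLM’s calculation against a deliberately dumb reference version on a few fixtures where I already know the answer. A minimal sketch of what I mean (the function names and data shape are made up, not from any real script):

    # Cross-check an LLM-written calculation against a tiny,
    # obviously-correct reference implementation on known fixtures.
    # All names here are hypothetical, for illustration only.

    def llm_monthly_revenue(rows):
        # Stand-in for the LLM-generated code I'm trying to trust.
        return sum(r["amount"] for r in rows if r["type"] == "revenue")

    def reference_monthly_revenue(rows):
        # Hand-written, deliberately simple and readable.
        total = 0.0
        for r in rows:
            if r["type"] == "revenue":
                total += r["amount"]
        return total

    fixtures = [
        [],  # empty month
        [{"type": "revenue", "amount": 100.0}, {"type": "expense", "amount": 40.0}],
        [{"type": "revenue", "amount": 19.99}, {"type": "revenue", "amount": 0.01}],
    ]

    for rows in fixtures:
        assert llm_monthly_revenue(rows) == reference_monthly_revenue(rows), rows
    print("spot checks passed")

It doesn’t prove the code is correct in general, but it catches the cases where the generated logic quietly does something other than what I asked for.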