Comment by idopmstuff

9 days ago

I'll start by saying that in my past life I was a PM, and from that angle I can very much see how people writing code for large-scale, production systems take real issue with the quality of what LLMs produce.

But these days I run a one-man business, and LLMs (currently Claude Code, previously GPT) have written me a ton of great code. To be clear, when I say "great" I don't mean up to your standards of code quality; rather, I mean that it does what I need it to do and saves me a bunch of time.

I've got a great internal dashboard that pulls in data from a few places, and right now CC is adding some functionality to a script that does my end-of-month financial spreadsheet update. I have a script that filters inbound leads (I buy e-commerce brands, generally from marketplaces that send me an inordinate number of emails that I previously had to wade through myself to find the rare gem). On the non-code front, I have a very long prompt that basically does the first pass of analysis on prospective acquisitions, and I use Shortcut.ai to clean up some of the P&Ls I get (I buy small e-commerce brands, so the finances are frequently bad).

So while I can't speak to using LLMs to write code if you're working in any sort of real SaaS business, I can definitely say that there's real, valid code to be had from these things for other uses.

One of my friends did a job for a government. He generated the code for it with an LLM. It produced a result that was roughly what he expected it to be. Neither he nor anybody else ever checked whether the code actually calculated what it was supposed to. "It did what [he] needed it to do." That government then started making decisions based on a result that nobody had verified. In other words, a lottery.

What you describe doesn't mean anything until there is hard proof that it really works. I understand that it seems to you that it works, but I've seen enough to know that that impression means absolutely nothing.
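To make "hard proof" concrete: even a few tests pinning the generated calculation code to hand-checked reference values would go a long way. A minimal sketch, where compute_weighted_score, the generated_calc module, and the figures are hypothetical stand-ins for whatever the generated code actually computes:

    # test_generated_calc.py - pin LLM-generated calculation code to hand-checked values.
    # compute_weighted_score is a hypothetical stand-in for the function the LLM wrote.
    import math
    from generated_calc import compute_weighted_score

    def test_known_inputs_give_known_outputs():
        # Reference values worked out by hand (or in a spreadsheet) before trusting the code.
        assert math.isclose(compute_weighted_score([10, 20, 30], [0.5, 0.3, 0.2]), 17.0)
        assert math.isclose(compute_weighted_score([100], [1.0]), 100.0)

    def test_rejects_mismatched_lengths():
        # The generated code should fail loudly, not silently return something plausible.
        try:
            compute_weighted_score([1, 2], [1.0])
            assert False, "expected a ValueError for mismatched inputs"
        except ValueError:
            pass

Run it with pytest; if the LLM later rewrites the function, it still has to agree with numbers somebody actually verified.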

  • Thanks, I can relate to the parent poster, and this is a really profound comment for me. I appreciate the way you framed it. I've felt compelled to fact-check my own LLM outputs, but I can't possibly keep up with the quantity, and it's tempting (though it seems irrational) to hand the results to a different LLM. My struggle is remembering that there needs to be validation of the inputs, queries, calculations, and logic, without getting distracted by all the other shiny new tokens in the result.