Comment by moi2388

2 days ago

I completely agree.

On a side note.. ya’ll must be prompt wizards if you can actually use the LLM code.

I use it for debugging sometimes to get an idea, or for a quick mock-up of a UI.

As for actual code.. the code it writes is a huge mess of spaghetti code, overly verbose, with serious performance and security risks, and complete misunderstanding of pretty much every design pattern I give it..

I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?

Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.

  • > B2B SaaS

    Perhaps that's part of it.

    People here work in all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.

    • > People here work in all kinds of industries.

      Yes, it would be nice to have a lot more context (pun intended) when people post how many LoC they introduced.

      B2B SaaS? Then can I assume that a browser is involved and that a big part of that 200k LoC is the verbose styling DSL we all use? On the other hand, Nginx, a production-grade web server, is 250k LoC (251,232 to be exact [1]). These two things are not comparable.

      The point being that, as I'm sure we all agree, LoC is not a helpful metric for comparison without more context, and different projects have vastly different amounts of information/feature density per LoC.

      [1] https://openhub.net/p/nginx

      8 replies →

    • On the other hand, fault-intolerant codebases are also often highly defined and almost always have rigorous automated tests already, which are two contexts in which coding agents specifically excel.

    • > In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.

      This is a false premise. LLMs themselves don't force you to introduce breaking changes into your code.

      In fact, the inception of coding agents was lauded as a major improvement to the developer experience because they allow the LLMs themselves to automatically react to feedback from test suites, thus speeding up how code was implemented while preventing regressions.

      If tweaking your code can result in breaking a million things, this is a problem with your code and how you worked to make it resilient. LLMs are only able to introduce regressions if your automated tests are unable to catch any of those million things breaking. If this is the case then your problems are far greater than LLMs existing, and at best LLMs only point out the elephant in the room.

  • Perhaps the issue is you were used to writing 200k lines of code. Most engineers would be aghast at that. Lines of code are a debit, not a credit.

    • I am now making an emotional reaction based on zero knowledge of the B2B codebase's environment, but to be honest I think it is relevant to the discussion on why people are "worlds apart".

      200k lines of code is a failure state. At this point you have lost control and can only make changes to the codebase through immense effort, and not at a tolerable pace.

      Agentic code writers are good at giving you this size of mess and at helping to shovel stuff around to make changes that are hard for humans due to the unusable state of the codebase.

      If overgrown, barely manageable codebases are all a person's ever known and they think it's normal that changes are hard, time-consuming, and need reams of code, I understand that they believe AI agents are useful as code writers. I think they do not have the foundation to tell mediocre from good code.

      I am extremely aware of the judgemental hubris of this comment. I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion.

      6 replies →

  • It's interesting how LLM enthusiasts will point to problems like IDE, context, model etc. but not the one thing that really matters:

    Which problem are you trying to solve?

    At this point my assumption is they learned that talking about this question will very quickly reveal that "the great things I use LLMs for" are actually personal throwaway pieces, not to be extended above triviality or maintained over longer than a year. Which, I guess, doesn't make for a great sales pitch.

    • It's amazing to make small custom apps and scripts, and they're such high quality (compared to what I would half-ass write and never finish/polish them) that they don't end up as "throwaway", I keep using them all the time. The LLM is saving me time to write these small programs, and the small programs boost my productivity.

      Often, I will solve a problem in a crappy single-file script, then feed it to Claude and ask to turn it into a proper GUI/TUI/CLI, add CI/CD workflows, a README, etc...

      I was very skeptical and reluctant of LLM assisted coding (you can look at my history) until I actually tried it last month. Now I am sold.

    • At work I often need smaller, short-lived scripts to find this or that insight, or to use visualization to render some data, and I find LLMs very useful at that.

      A non coding topic, but recently I had difficulty articulating a summarized state of a complex project, so I spoke 2 min in the microphone and it gave me a pretty good list of accomplishments, todos and open points.

      Some colleagues have found them useful for modernizing dependencies of micro services or to help getting a head start on unit test coverage for web apps. All kinds of grunt work that’s not really complex but just really moves quite some text.

      I agree it’s not life changing, but a nice help when needed.

    • I use it to do all the things that I couldn't be bothered to do before. Generate documentation, dump and transform data for one off analyses, write comprehensive tests, create reports. I don't use it for writing real production code unless the task is very constrained with good test coverage, and when I do it's usually to fix small but tedious bugs that were never going to get prioritized otherwise.

  • There is definitely a divide in users - those for whom it works and those for whom it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined or rigid like in Qt, then you get completely unmaintainable code as a result.

    If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.

    • While I agree that AI assisted coding probably works much better for languages and use cases that have a lot more relevant training data, when I read comments from people who like LLM assisted coding vs. those that don't, I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.

      The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks where large scale context is required). Those who can't seem to get value from AI tools seem (at least to me) less tolerant of AI mistakes, and less willing to iterate with AI agents, and they seem more willing to "throw the baby out with the bathwater", i.e. fixate on some of the failure cases but then not willing to just limit usage to cases where AI does a better job.

      To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than the domain. For me personally, while I get a lot of value in AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.

      10 replies →

    • > If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.

      I agree with the general premise. There is however more to it than "heavily borrowed". The degree to which a code base is organized and structured and curated plays as big of a role as what framework you use.

      If your project is a huge pile of unmaintainable and buggy spaghetti code then don't expect an LLM to do well. If your codebase is well structured, clear, and follows patterns systematically, then of course a glorified pattern-matching service will do far better in outputting acceptable results.

      There is a reason why one of the most basic vibecoding guidelines is to include a prompt cycle to clean up and refactor code between introducing new features. LLMs fare much better when the project in their context is in line with their training. If you refactor your project to align it with what a LLM is trained to handle, it will do much better when prompted to fill in the gaps. This goes way beyond being "heavily borrowed".

      I don't expect your average developer struggling with LLMs to acknowledge this fact, because then they would need to explain why their work is unintelligible to a system trained on vast volumes of code. Garbage in, garbage out. But who exactly created all the garbage going in?

    • > When the system design is not pre-defined or rigid like

      Why would an LLM be any worse building from language fundamentals (which it knows, in ~every language)? Given how new this paradigm is, the far more obvious and likely explanation seems to be: LLM-powered coding requires somewhat different skills and strategies. The success of each user heavily depends on their learning rate.

    • I suspect it comes down to how novel the code you are writing is and how tolerant of bugs you are.

      People who use it to create a proof of concept of something that is in the LLM training set will have a wildly different experience to somebody writing novel production code.

      Even there the people who rave the most rave about how well it does boilerplate.

    • I think there are still lots of code “artisans” who are completely dogmatic about what code should look like; once the tunnel vision goes and you realise the code just enables the business, it all of a sudden becomes a velocity godsend.

      23 replies →

  • And also ask: "How much money do you spend on LLMs?"

    In the long run, that is going to be what drives their quality. At some point the conversation is going to evolve from whether or not AI-assisted coding works to what the price point is to get the quality you need, and whether or not that price matches its value.

  • I deal with a few code bases at work and the quality differs a lot between projects and frameworks.

    We have 1-2 small python services based on Flask and Pydantic, very structured and a well-written development and extension guide. The newer Copilot models perform very well with this, and improving the dev guidelines keep making it better. Very nice.

    We also have a central configuration of applications in the infrastructure and what systems they need. A lot of similarly shaped JSON files, now with a well-documented JSON schema (which is nice to have anyway). Again, very high quality. Someone recently joked we should throw these service requests at a model and let it create PRs to review.

    But currently I'm working in Vector and its Vector Remap Language... it's enough of a mess that I'm faster working without any copilot "assistance". I think the main issue is that there is very little VRL code out in the open, and the remaps depend on a lot of unseen context, which one would have to work on giving to the LLM. Had similar experiences with OPA and a few more of these DSLs.

  • > It's like we live in different worlds.

    There is the huge variance in prompt specificity as well as the subtle differences inherent to the models. People often don't give examples when they talk about their experiences with AI so it's hard to get a read on what a good prompt looks like for a given model or even what a good workflow is for getting useful code out of it.

    • Some gave. Some even recorded it, and showed it, because they thought that they were good with it. But they weren't good at all.

      They were slower than coding by hand, if you wanted to keep quality. Some were almost as quick as copy-pasting from the code just above the generated one, but their quality was worse. They even kept some bugs in the code during their reviews.

      So the different world is probably about what the acceptable level of quality means. I know a lot of coders who don't give a shit whether what they're doing makes sense, or what their bad solution will cause in the long run. They ignore everything else, just the “done” state next to their tasks in Jira. They will never solve complex bugs; they simply don't care enough. At a lot of places, they are the majority. For them, an LLM can be an improvement.

      Claude Code the other day made a test for me, which mocked everything out from the live code. Everything was green, everything was good. On paper. A lot of people simply wouldn't care to even review properly. That thing can generate a few thousand lines of semi-usable code per hour. It's not built to review it properly. Serena MCP, for example, is specifically built not to review what it does; its creators state as much.

      7 replies →

    • I'm convinced that for coding we will have to use some sort of TDD or enhanced requirement framework to get the best code. Even on human-made systems the quality is highly dependent on the specificity of the requirements and the engineer's ability to probe the edge cases. Something like writing all the tests first (even in something like Cucumber) and having the LLM write code to get them to pass would likely produce better code, even though most devs hate the test-first paradigm.
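
      To make that concrete, here's the shape of the loop I have in mind, sketched as plain pytest rather than Cucumber (slugify and the slugger module are made-up illustrations, not from any real project): a human writes the expectations first, then the agent iterates on the implementation until the suite is green.

          # test_slugger.py: written by a human before any implementation exists.
          # The agent's only job is to create slugger.py and make these pass without
          # editing this file (the import fails until it does, which is the point).
          import pytest

          from slugger import slugify

          def test_lowercases_and_hyphenates():
              assert slugify("Hello World") == "hello-world"

          def test_collapses_repeated_separators():
              assert slugify("foo  --  bar") == "foo-bar"

          @pytest.mark.parametrize("bad", ["", "   ", "!!!"])
          def test_rejects_inputs_with_no_usable_characters(bad):
              with pytest.raises(ValueError):
                  slugify(bad)

      The nice part is that the spec is executable, so "done" stops being a matter of opinion.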

  • My AI experience has varied wildly depending on the problem I'm working on. For web apps in Python, they're fantastic. For hacking on old engineering calculation code written in C/C++, it's an unmitigated disaster and an active hindrance.

    • Just last week I asked Copilot to make a FastCGI client in C. Five times it gave me code that did not compile. After some massaging I got it to compile; it didn't work. After some changes, it works. Now I say "I do not want to use libfcgi, just want a simple implementation". After already an hour of wrestling, I realize the whole thing blocks, and I want no blocking calls… still half an hour later fighting, I'm slowly getting there. I see the code: a total mess.

      I deleted it all and wrote from scratch a 350-line file which works.

      4 replies →

  • It’s not just you, I think some engineers benefit a lot from AI and some don’t. It’s probably a combination of factors including: AI skepticism, mental rigidity, how popular the tech stack is, and type of engineering. Some problems are going to be very straightforward.

    I also think it’s that people don’t know how to use the tool very well. In my experience I don’t guide it to do any kind of software pattern or ideology. I think that just confuses the tool. I give it very little detail and have it do tasks that are evident from the code base.

    Sometimes I ask it to do rather large tasks and occasionally the output is like 80% of the way there and I can fix it up until it’s useful.

    • Yah. Latest thing I wrote was

      * Code using sympy to generate math problems testing different skills for students, with difficulty values affecting what kinds of things are selected, and various transforms to problems possible (e.g. having to solve for z+4 of 4a+b instead of x) to test different subskills

      (On this part, the LLM did pretty well. The code was correct after a couple of quick iterations, and the base classes and end-use interfaces are correct. There are a few things in the middle that are unnecessarily "superstitious" and check for conditions that can't happen, and so I need to work with the LLM to clean it up. A rough sketch of this kind of generator follows the list below.)

      * Code to use IRT to estimate the probability that students have each skill and to request problems with appropriate combinations of skills and difficulties for each student.

      (This was somewhat garbage. Good database & backend, but the interface to use it was not nice and it kind of contaminated things).

      * Code to recognize QR codes in the corners of a worksheet, find answer boxes, and feed the image to ChatGPT to determine whether the scribble in the box is the answer in the correct form.

      (This was 100%, first time. I adjusted the prompt it chose to better clarify my intent in borderline cases).
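
      To give a flavour of the first bullet, the generator is roughly this kind of thing (a trimmed-down illustration with made-up names, not the actual code): the difficulty knob widens the coefficient range, and a transform can ask the student to solve for x + 4 instead of the bare variable.

          import random

          import sympy as sp

          def linear_problem(difficulty: int, solve_for_offset: bool = False):
              """Generate 'solve a*x + b = c', with an optional sub-skill transform."""
              x = sp.symbols("x")
              hi = 3 + 3 * difficulty                    # difficulty scales the numbers
              a, b, c = (random.randint(1, hi) for _ in range(3))
              equation = sp.Eq(a * x + b, c)
              answer = sp.solve(equation, x)[0]
              target = x                                 # what the student must report
              if solve_for_offset:                       # e.g. "solve for x + 4"
                  target, answer = x + 4, answer + 4
              return equation, target, answer

          if __name__ == "__main__":
              eq, target, ans = linear_problem(difficulty=2, solve_for_offset=True)
              print(f"Solve for {target}: {sp.pretty(eq)} (answer: {ans})")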

      The output was, overall, pretty similar to what I'd get from a junior engineer under my supervision-- a bit wacky in places that aren't quite worth fixing, a little bit of technical debt, a couple of things more clever that I didn't expect myself, etc. But I did all of this in three hours and $12 expended.

      The total time supervising it was probably similar to the amount of time spent supervising the junior engineer... but the LLM turns things around quick enough that I don't need to context switch.

      1 reply →

    • > I also think it’s that people don’t know how to use the tool very well.

      I think this is very important. You have to look at what it suggests critically, and take what makes sense. The original comment was absolutely correct that AI-generated code is way too verbose and disconnected from the realities of the application and large-scale software design, but there can be kernels of good ideas in its output.

    • I think a lot of it is tool familiarity. I can do a lot with Cursor but frankly I find out about "big" new stuff every day like agents.md. If I wasn't paying attention or also able to use Cursor at home then I'd probably learn more inefficiently. Learning how to use rule globs versus project instructions was a big learning moment. As I did more LLM work on our internal tools that was also a big lesson in prompting and compaction.

      Certain parts of HN and Reddit I think are very invested in nay-saying because it threatens their livelihoods or sense of self. A lot of these folks have identities that are very tied up in being craftful coders rather than business problem solvers.

  • I think it's down to language and domain more than tools.

    No model I've tried can write, usefully debug, or even explain CMake. (It invents new syntax if it gets stuck; I often have to prompt multiple AIs to know if even the first response in the context was made up.)

    My luck with embedded C has been atrocious for existing codebases (burning millions of tokens), but passable for small scripts (Arduino projects).

    My experience with Python is much better: suggesting relevant libraries and functions, debugging odd errors, or even writing a small script on its own. Even the original GitHub Copilot, which I got access to early, was excellent on Python.

    A lot of the people who seem to have fully embraced agentic vibe-coding are in the web or Node.js domain, which I've not worked in myself since pre-AI.

    I've tried most major models or schemes (free or trial) in the hope of finding any of them useful, but haven't found much use yet.

  • > It's like we live in different worlds.

    We probably do, yes. The web domain, compared to a cybersecurity firm, compared to embedded, will have very different experiences, because clearly there's a lot more code to train on for one domain than the other (for obvious reasons). You can have colleagues at the same company or even the same team with drastically different experiences because they might be in the weeds on a different part of the tech.

    > I then carefully review and test.

    If most people did this, I would have 90% fewer issues with AI. But as we'd expect, people see shortcuts and use them to cut corners, not to give themselves more time to polish the edges.

  • What tech stack do you use?

    Betting in advance that it's JavaScript or Python, probably with very mainstream libraries or frameworks.

    • FWIW, Claude Code does a great job for me on complex-domain Rust projects, but I just use it one relatively small feature/code chunk at a time, where oftentimes it can pick up existing patterns etc. (I try to point it at similar existing code/features if I have them). I do not let it write anything creative where it has to come up with its own design (either high-level architectural, or low-level facilities). Basically I draw the lines manually, and let it color the space between, using existing reference pictures. Works very, very well for me.

    • Is this meant to detract from their situation? These tech stacks are mainstream because so many use them... it's only natural that AI would be the best at writing code in contexts where it has the most available training data.

      2 replies →

    • You are exactly right in my case - JavaScript and Python dealing with the AWS CDK and SDK. Where there is plenty of documentation and code samples.

      Even when it occasionally gets it wrong, it’s just a matter of telling ChatGPT - “verify your code using the official documentation”.

      But honestly, even before LLMs when deciding on which technology, service, or frameworks to use I would always go with the most popular ones because they are the easiest to hire for, easiest to find documentation and answers for and when I myself was looking for a job, easiest to be the perfect match for the most jobs.

      6 replies →

  • As a practical example, I've recently tried out v0's new updated systems to scaffold a very simple UI where I can upload screenshots from videogames I took and tag them.

    The resulting code included an API call to run arbitrary SQL queries against the DB. Even after pointing this out, this API call was not removed or at least secured with authentication rules, but instead /just/hidden/through/obscure/paths...

  • It could be the language. Almost 100% of my code is written by AI; I do supervise as it creates and steer it in the right direction. I configure the code agents with examples of all the frameworks I'm using. My choice of Rust might be disproportionately providing better results, because cargo, the expected code structure, examples, docs, and error messages are so well thought out in Rust that the coding agents can really get very far. I work on 2-3 projects at once, cycling through them supervising their work. Most of my work is simulation, physics and complex robotics frameworks. It works for me.

  • I agree, it's like they looked at GPT 3.5 one time and said "this isn't for me"

    The big 3 - Opus 4.1, GPT-5 High, Gemini 2.5 Pro

    Are astonishing in their capabilities, it's just a matter of providing the right context and instructions.

    Basically, "you're holding it wrong"

  • Do you not think part of it is just whether employers permit it or not? My conglomerate employer took a long time to get started and has only just rolled out agent mode in GH Copilot, but even that is in some reduced/restricted mode vs the public one. At the same time we have access to lots of models via an internal portal.

    • Companies that don't allow their devs to use LLMs will go bankrupt and in the meantime their employees will try to use their private LLM accounts.

  • I am also constantly astonished.

    That said, observing attempts by skeptics to “unsuccessfully” prompt an LLM have been illuminating.

    My reaction is usually either:

    - I would never have asked that kind of question in the first place.

    - The output you claim is useless looks very useful to me.

  • B2B SaaS in most cases is a sophisticated mask over some structured data, perhaps with great UX, automation and convenience, so I can see LLMs being more successful there, all the more so because there is more training data and many processes are streamlined. Not all domains are equal; go try to develop a serious game (not yet another simple and broken arcade) with LLMs and you'll have a different take.

  • I think people react to AI with strong emotions, which can come from many places, anxiety/uncertainty about the future being a common one, strong dislike of change being another (especially amongst autists, who, I would guess based on me and my friend circle, are quite common around here). Maybe it explains a lot of the spicy hot-takes you see here and on lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore any extremist takes entirely, both positive and negative, to get a pulse on what's going on.

    If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).

  • It really depends, and can be variable, and this can be frustrating.

    Yes, I’ve produced thousands of lines of good code with an LLM.

    And also yes, yesterday I wasted over an hour trying to define a single docker service block for my docker-compose setup. Constant hallucination, eventually had to cross check everything and discover it had no idea what it was doing.

    I’ve been doing this long enough to be a decent prompt engineer. Continuous vigilance is required, which can sometimes be tiring.

  • GitHub copilot, Microsoft copilot, Gemini, loveable, gpt, cursor with Claude models, you name it.

  • Lines of code is not a useful metric for anything. Especially not productivity.

    The less code I write to solve a problem the happier I am.

  • It could be because your job is boilerplate derivatives of well solved problems. Enjoy the next 1 to 2 years because yours is the job Claude is coming to replace.

    Stuff Wordpress templates should have solved 5 years ago.

  • Honestly, the best way to get good code, at least with TypeScript and JavaScript, is to have like 50 ESLint plugins.

    That way it constantly yells at Sonnet 4 to get the code into at least a better state.

    If anyone is curious, I have a massive ESLint config for TypeScript that really gets good code out of Sonnet.

    But before I started doing this, the code it wrote was so buggy, and it was constantly trying to duplicate functions into separate files, etc.

  • [flagged]

    • This is such a bizarre comment. You have no idea what code base they are talking about, their skill level, or anything.

    • > I'm struggling to even describe... 200,000 lines of code is so much.

      The point about increasing levels of abstractions is a really good one, and it's worth considering whether any new code that's added is entirely new functionality, some kind of abstraction over some existing functionality (that might then reduce the need for as new code), or (for good or bad reason) some kind of copy of some of the existing behaviour but re-purposed for a different use case.

    • 200kloc is what, 4 reams of paper, double sided? So, 10% of that famous Margaret Hamilton picture (which is roughly "two spaceships worth of flight code".) I'm not sure the intuition that gives you is good but at least it slots the raw amount in as "big but not crazy big" (the "9 years work" rather than "weekend project" measurement elsethread also helps with that.)

I agree. AI is a wonderful tool for making fuzzy queries on vast amounts of information. More and more I'm finding that Kagi's Assistant is my first stop before an actual search. It may help inform me about vocabulary I'm lacking which I can then go successfully comb more pages with until I find what I need.

But I have not yet been able to consistently get value out of vibe coding. It's great for one-off tasks. I use it to create matplotlib charts just by telling it what I want and showing it the schema of the data I have. It nails that about 90% of the time. I have it spit out close-ended shell scripts, like recently I had it write me a small CLI tool to organize my Raw photos into a directory structure I want by reading the EXIF data and sorting the images accordingly. It's great for this stuff.
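
To give a sense of what these close-ended scripts look like, the photo organizer was roughly this shape (an illustrative sketch rather than the actual output; it assumes the third-party exifread package for the EXIF parsing):

    # Sketch of the kind of close-ended CLI tool described above (illustrative only;
    # assumes the third-party exifread package, but any EXIF reader would do).
    import argparse
    import shutil
    from pathlib import Path

    import exifread

    def capture_date(path: Path) -> str | None:
        with open(path, "rb") as f:
            tags = exifread.process_file(f, details=False)
        raw = tags.get("EXIF DateTimeOriginal") or tags.get("Image DateTime")
        # EXIF dates look like "2024:05:17 13:02:11"; keep just "2024-05-17"
        return str(raw).split(" ")[0].replace(":", "-") if raw else None

    def main() -> None:
        ap = argparse.ArgumentParser(description="Sort RAW photos into date folders")
        ap.add_argument("src", type=Path)
        ap.add_argument("dest", type=Path)
        ap.add_argument("--ext", default=".ARW", help="RAW file extension to match")
        args = ap.parse_args()
        for photo in sorted(args.src.rglob(f"*{args.ext}")):
            folder = args.dest / (capture_date(photo) or "undated")
            folder.mkdir(parents=True, exist_ok=True)
            shutil.move(str(photo), str(folder / photo.name))

    if __name__ == "__main__":
        main()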

But anything bigger it seems to do useless crap. Creates data models that already exist in the project. Makes unrelated changes. Hallucinates API functions that don't exist. It's just not worth it to me to have to check its work. By the time I've done that, I could have written it myself, and writing the code is usually the most pleasurable part of the job to me.

I think the way I'm finding LLMs to be useful is that they are a brilliant interface to query with, but I have not yet seen any use cases I like where the output is saved, directly incorporated into work, or presented to another human that did not do the prompting.

  • Have you tried Opus? It's what got me past using LLMs only marginally. Standard disclaimers apply in that you need to know what it's good for and guide it well, but there's no doubt at this point it's a huge productivity boost, even if you have high standards - you just have to tell it what those standards are sometimes.

    • Opus was also the threshold for me where I started getting real value out of (correctly applied) LLMs for coding.

I just had Claude Sonnet 4 build this for me: https://github.com/kstenerud/orb-serde

Using the following prompt:

    Write a rust serde implementation for the ORB binary data format.

    Here is the background information you need:

    * The ORB reference material is here: https://github.com/kstenerud/orb/blob/main/orb.md
    * The formal grammar dscribing ORB is here: https://github.com/kstenerud/orb/blob/main/orb.dogma
    * The formal grammar used to describe ORB is called Dogma.
    * Dogma reference material is here: https://github.com/kstenerud/dogma/blob/master/v1/dogma_v1.0.md
    * The end of the Dogma description document has a section called "Dogma described as Dogma", which contains the formal grammar describing Dogma.

    Other important things to remember:

    * ORB is an extension of BONJSON, so it must also implement all of BONJSON.
    * The BONJSON reference material is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.md
    * The formal grammar desribing BONJSON is here: https://github.com/kstenerud/bonjson/blob/main/bonjson.dogma

Is it perfect? Nope, but it's 90% of the way there. It would have taken me all day to build all of these ceremonious bits, and Claude did it in 10 minutes. Now I can concentrate on the important parts.

  • First and foremost, it’s 404. Probably a mistake, but I chuckled a bit when someone says "AI build this thing and it’s 90% there" and then posts a dead link.

    • Weird... For some reason Github decided that this time my repo should default to private.

What tooling are you using?

I use aider and your description doesn't match my experience, even with a relatively bad-at-coding model (gpt-5). It does actually work and it does generate "good" code - it even matches the style of the existing code.

Prompting is very important, and in an existing code base the success rate is immensely higher if you can hint at a specific implementation - i.e. something a senior who is familiar with the codebase somewhat can do, but a junior may struggle with.

It's important to be clear eyed about where we are here. I think overall I am still faster doing things manually than iterating with aider on an existing code base, but the margin is not very much, and it's only going to get better.

Even though it can do some work a junior could do, it can't ever replace a junior human... because a junior human also goes to meetings, drives discussions, and eventually becomes a senior! But management may not care about that fact.

The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions. I use the Duck Duck Go AI a lot to answer questions. I trust it about as far as I can throw the datacenter it resides in, but it's useful for quickly verifiable things. Especially stuff like syntax and command line options for various programs.

  • > The one thing I've found AI is good at is parsing through the hundreds of ad ridden, barely usable websites for answers to my questions.

    One thing I can guarantee you is that this won't last. No sane MBA will ignore that revenue stream.

    Image hosting services, all over again.

    • You are entirely correct. The enshittification will continue. All we can do is enjoy these things while they are still usable.

      4 replies →

    • The difference, of course, is that most AI companies don't have the malicious motive that Google has by also being an ad company.

      4 replies →

It's one of those you get what you put in kind of deals.

If you spend a lot of time thinking about what you want, describing the inner workings, edge cases, architecture and library choices, and put that into a thoughtful markdown, then maybe after a couple of iterations you will get half decent code. It certainly makes a difference between that and a short "implement X" prompt.

But it makes one think - at that point (writing a good prompt that is basically a spec), you've basically solved the problem already. So LLM in this case is little more than a glorified electric typewriter. It types faster than you, but you did most of the thinking.

  • Right, and then after you do all the thinking and the specs, you have to read and understand and own every single line it generated. And speaking for myself, I am nowhere near as good at thinking through code I am reviewing as at thinking through code I am writing.

    Other people will put up PRs full of code they don't understand. I'm not saying everyone who is reporting success with LLMs are doing that, but I hear it a lot. I call those people clowns, and I'd fire anyone who did that.

    • If it passes the unit tests I make it write and works for my sample manual cases I absolutely will not spend time reading the implementation details unless and until something comes up. Sometimes garbage makes its way into git but working code is better than no code and the mess can be cleaned up later. If you have correctness at the interface and function level you can get a lot done quickly. Technical debt is going to come out somewhere no matter what you do.

      7 replies →

I’ve built 2 SaaS applications with LLM coding one of which was expanded and release to enterprise customers and is in good use today - note I’ve got years of dev experience and I follow context and documentation prompts and I’m using common LLM languages like typescript and python and react and AWS infra

Now it requires me to fully review all code and understand what the LLM is doing at the function, class, and API level. In fact, it works better at the method or component level for me, and I had a lot of cleanup work (and lots of frustration with the models) on the codebase, but overall there's no way I could equal the velocity I have now without it.

  • For a large enterprise SaaS with millions of lines of code, I think the other important step is to reject code your engineers submit that they can't explain. I myself reject I'd say 30% of the code the LLMs generate, but the power is in being able to stay focused on larger problems while rapidly implementing smaller accessory functions that enable that continued work, without stopping to add another engineer to the task.

    I've definitely 2-4X'd depending on the task. For small tasks I've definitely 20X'd myself for some features or bugfixes.

I do frontend work (React/TypeScript). I barely write my own code anymore, aside from CSS (the LLMs have no aesthetic sensibilities). Just prompting with Gemini 2.5 Pro. Sometimes Sonnet 4.

I don't know what to tell you. I just talk to the thing in plain but very specific English and it generally does what I want. Sometimes it will do stupid things, but then I either steer it back in the direction I want or just do it myself if I have to.

I agree with the article but also believe LLM coding can boost my productivity and ability to write code over long stretches. Sure, getting it to write a whole feature carries a high risk. But getting it to build out a simple API with examples above and below it is a piece of cake; it takes a few seconds and would have taken me a few minutes.

I think it has a lot to do with skill level. Lower-skilled developers seem to feel it gives them a lot of benefit. Higher-skilled developers just get frustrated looking at all the errors it produces.

The bigger the task, the more messy it'll get. GPT5 can write a single UI component for me no problem. A new endpoint? If it's simple, no problem. The risk increases as the complexity of the task does.

  • I break complex tasks down into simple tasks when using ChatGPT, just like I did before ChatGPT with modular design.

The AI agents tend to fail for me with open-ended or complex tasks requiring multiple steps. But I've found it massively helpful if you have these two things: 1) a typed language (better if strongly typed), and 2) a program that is logically structured, follows best practices, and has hierarchical composition.

The agents are able to iterate and work with the compiler until they get it right, and the combination of 1 and 2 means there are fewer possible “right answers” to whatever problem I have. If I structure my prompts to basically fill in the blanks of my code in specific areas, it saves a lot of time. Most of what I prompt is something already done, and usually one Google search away. This saves me the time to search it up, figure out whatever syntax I need, etc.
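
As a rough illustration of what I mean by filling in the blanks (everything below is made up, not from a real codebase): hand the agent a typed skeleton whose signature and docstring pin down the contract, then let it iterate against the type checker and tests until the body is filled in.

    # A made-up "fill in the blanks" target: the types and docstring pin down the
    # contract, and the agent's job is only the body.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor_id: str
        celsius: float

    def rolling_average(readings: list[Reading], window: int) -> list[float]:
        """Trailing mean of `celsius` over `window` samples, oldest to newest.

        TODO(agent): implement. Do not mutate `readings`, raise ValueError if
        window < 1, and keep it O(n).
        """
        raise NotImplementedError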

AI is really good at writing tests.

AI is also pretty good if you get it to do small chunks of code for you. This means you come with the architecture, the implementation details, and how each piece is structured. When I walk AI through each unit of code I find the results are better, and it's easier for me to address issues as I progress.

This may seem somewhat redundant, though. Sometimes it's faster to just do it yourself. But, with a toddler who hates sleep, I've found I've been able to maintain my velocity... even on days I get 3 hrs of sleep.

I don't code every day and am not an expert. Supposedly the sort of casual coder that LLMs are supposed to elevate into senior engineers.

Even I can see they have big blind spots. As the parent said, I get overly verbose code that does run, but is nowhere near the best solution. Well, for really common problems and patterns I usually get a good answer. Need a more niche problem solved? You'd better brush up on your Googling skills and do some research if you care about code quality.

If you actually believe this, you're either using bad models or just terrible at prompting and giving proper context. Let me know if you need help, I use generated code in every corner of my computer every day

My favourite code smell that LLMs love to introduce is redundant code comments.

// assign "bar" to foo

const foo = "bar";

They love to do that shit. I know you can prompt it not to. But the amount of PRs I'm reviewing these days that have those types of comments is insane.

The code LLMs write is much better than mine. Way less shortcuts and spaghetti. Maybe that means that I am a lousy coder but the end result is still better.

I see LLM coding as hinting on steroids. I don't trust it to actually write all of my code, but sometimes it can get me started, like a template.

I haven't had that experience, but I tend to keep my prompts very focused with a tightly limited scope. Put a different way, if I had a junior or mid level developer, and I wanted them to create a single-purpose class of 100-200 lines at most, that's how I write my prompts.

Likewise with Powershell. It's good to give you an approach or some ideas, but copy/paste fails about 80% of the time.

Granted, I may be an inexpert prompter, but at the same time, I'm asking for basic things, as a test, and it just fails miserably most of the time.

I've been pondering this for a while. I think there's an element of dopamine that LLMs bring to the table. They probably don't make a competent senior engineer much more productive if at all, but there's that element of chance that we don't get a lot of in this line of work.

I think a lot of us eventually arrive at a point where our jobs get a bit boring and all the work starts to look like some permutation of past work. If instead of going to work and spending two hours adding some database fields and writing some tests, you had the opportunity to either:

A) Do the thing as usual in the predictable two hours

B) Spend an hour writing a detailed prompt as if you were instructing a junior engineer on a PIP to do it, and doing all the typical cognitive work you'd have done normally and then some, but then instead of typing out the code in the next hour, you have a random chance to either press enter, and tada the code has been typed and even kinda sorta works, after this computer program was "flibbertigibbeting" for just 10 minutes. Wow!

Then you get that sweet dopamine hit that tells you you're a really smart prompt engineer who did a two hour task in... cough 10 minutes. You enjoy your high for a bit, maybe go chat with some subordinate about how great your CLAUDE.md was and if they're not sure about this AI thing it's just because they're bad at prompt engineering.

Then all you have to do is cross your t's and dot your i's and it's smooth sailing from there. Except, it's not. Because you (or another engineer) will probably find architectural/style issues when reviewing the code, issues against the conventions you explicitly told it to follow but it ignored, and you'll have to fix those. You'll also probably be sobering up from your dopamine rush by now, and realize that you have to either review all the other lines of AI-generated code, which you could have just correctly typed once.

But now you have to review with an added degree of scrutiny, because you know it's really good at writing text that looks beautiful, but is ever so slightly wrong in ways that might even slip through code review and cause the company to end up in the news.

Alternatively, you could yolo and put up an MR after a quick smell, making some other poor engineer do your job for you (you're a 10x now, you've got better things to do anyway). Or better yet, just have Claude write the MR, and don't even bother to read it. Surely nobody's going to notice your "acceptance criteria" section says to make sure the changes have been tested on both Android and Apple, even though you're building a microservice for an AI-powered smart fridge (mostly just a fridge, except every now and then it starts shooting ice cubes across the room at mach 3). Then three months later, someone who never realized there are three different identical "authenticate" functions spends an hour scratching their head about why the code they're writing is not doing anything (because it's actually running another, redundant function that nobody ever seems to catch in MR review, because it's not reflected in a diff).

But yeah, that 10 minute AI magic trick sure felt good. There are times when work is dull enough that option B sounds pretty good, and I'll dabble. But yeah, I'm not sure where this AI stuff leads, but I'm pretty confident it won't be taking over our jobs any time soon (an ever-increasing quota of H-1Bs and STEM OPT student visas working for 30% less pay, on the other hand, might).

It's just that being the dumbest thing we ever heard still doesn't stop some people from doing it anyway. And that goes for many kinds of LLM application.

I hate to admit it, but it is the prompt (call it context if ya like, includes tools). Model is important, window/tokensz are important, but direction wins. Also codebase is important, greenfield gets much better results, so much so that we may throw away 40 years of wisdom designed to help humans code amongst each other and use design patterns that will disgust us.

Could the quality of your prompt be related to our differing outcomes? I have decades of pre-AI experience and I use AI heavily. If I let it go off on its own, it's not as good as constraining and hand-holding it.

> ya’ll must be prompt wizards

Thank you, but I don’t feel that way.

I’d ask you a lot of details…what tool, what model, what kind of code. But it’d probably take a lot to get to the bottom of the issue.

Not only a prompt wizard, you need to know what prompts are bad or good and also use bad/lazy prompts to your advantage

Sounds like you are using it entirely wrong then...

Just yesterday I uploaded a few files of my code (each about 3000+ lines) into a GPT-5 project and asked for assistance in changing a lot of database calls into a caching system, and it proceeded to create a full 500-line file with all the caching objects and functions I needed. Then we went section by section through the main 3000+ line file to change parts of the database queries into the cached version. [I didn't even really need to do this, it basically detected everything I would need changing at once and gave me most of it, but I wanted to do it in smaller chunks so I was sure what was going on]

Could I have done this without AI? Sure.. but this was basically like having a second pair of eyes and validating what I'm doing. And saving me a bunch of time so I'm not writing everything from scratch. I have the base template of what I need then I can improve it from there.

All the code it wrote was perfectly clean.. and this is not a one off, I've been using it daily for the last year for everything. It almost completely replaces my need to have a junior developer helping me.

  • You mean like it turned on Hibernate, or it wrote some custom-rolled in-app cache layer?

    I usually find these kinds of caching solutions to be extremely complicated (well, the cache invalidation part) and I'm a bit curious what approach it took.

    You mention it only updated a single file so I guess it's not using any updates to the session handling so either sticky sessions are not assumed or something else is going on. So then how do you invalidate the app level cache for a user across all machine instances? I have a lot of trauma from the old web days of people figuring this out so I'm really curious to hear about how this AI one shot it in a single file.

    • This is C#, so it basically just automatically detected that I had 4 object types I was working with that were being updated in the database and that I wanted to keep in a concurrent-dictionary type of cache. So it created the dictionaries for each object type with the appropriate keys, created functions for each object type so that if I touch an object it gets marked for update, etc.

      It created the function to load in the data, then the finalize where it writes to the DB what was touched and clears the cache.
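
      Roughly this shape, sketched here in Python for brevity rather than the actual C# (names are illustrative; the real version uses a ConcurrentDictionary per object type and my own DB layer):

          class WriteBackCache:
              """Illustrative sketch only: per-type in-memory maps, write-back on finalize."""

              def __init__(self, db):
                  self.db = db                            # assumed to expose save(type, key, obj)
                  self.objects: dict[str, dict[int, object]] = {}  # one map per object type
                  self.dirty: dict[str, set[int]] = {}             # "touched" keys per type

              def load(self, obj_type: str, rows: dict[int, object]) -> None:
                  # Pull the data in up front so reads never hit the DB mid-run.
                  self.objects[obj_type] = dict(rows)
                  self.dirty[obj_type] = set()

              def touch(self, obj_type: str, key: int, obj: object) -> None:
                  # Update in memory and remember that this row needs writing back.
                  self.objects[obj_type][key] = obj
                  self.dirty[obj_type].add(key)

              def get(self, obj_type: str, key: int):
                  return self.objects[obj_type].get(key)

              def finalize(self) -> None:
                  # Write only what was touched, then drop the cache.
                  for obj_type, keys in self.dirty.items():
                      for key in keys:
                          self.db.save(obj_type, key, self.objects[obj_type][key])
                  self.objects.clear()
                  self.dirty.clear()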

      Again- I'm not saying this is anything particularly fancy, but it did the general concept of what I wanted. Also this is all iterative; when it creates something I talk to it like a person to say "hey I want to actually load in all the data, even though we will only be writing what changed" and all that kind of stuff.

      Also the bigger help wasn't really the creation of the cache, it was helping to make the changes and detect what needed to be modified.

      At the end of the day, even if I want to go a slightly different route for how it did the caching, it creates all the framework so I can simplify if needed.

      A lot of the time, using this LLM approach is about getting all the boilerplate out of the way.. sometimes just starting something by yourself is daunting. I find this to be a great way to begin.

  • I know, I don't understand what problems people are having with getting usable code. Maybe the models don't work well with certain languages? Works great with C++. I've gotten thousands of lines of clean, compiles-on-the-first-try, and obviously correct code from ChatGPT, Gemini, and Claude.

    I've been assuming the people who are having issues are junior devs, who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the llm to do.

      From my experience, the further you get from the sort of stuff that's easily accessible on Stack Overflow, the worse it gets. I've had few problems having an AI write out some minor Python scripts, but I get severely poorer results with Unreal C++ code, and it badly hallucinates nonsense if asked anything general about Unreal architecture and its API.

      5 replies →

  • How large is that code-base overall? Would you be able to let the LLM look at the entirety of it without it crapping out?

    It definitely sounds nice to go and change a few queries, but did it also consider the potential impacts in other parts of the source or in adjacent running systems? The query itself here might not be the best example, but you get what I mean.