Comment by brushfoot

3 days ago

I read AI coding negativity on Hacker News and Reddit with more and more astonishment every day. It's like we live in different worlds. I expect the breadth of tooling is partly responsible. What it means to you to "use the LLM code" could be very different from what it means to me. What LLM are we talking about? What context does it have? What IDE are you using?

Personally, I wrote 200K lines of my B2B SaaS before agentic coding came around. With Sonnet 4 in Agent mode, I'd say I now write maybe 20% of the ongoing code from day to day, perhaps less. Interactive Sonnet in VS Code and GitHub Copilot Agents (autonomous agents running on GitHub's servers) do the other 80%. The more I document in Markdown, the higher that percentage becomes. I then carefully review and test.

> B2B SaaS

Perhaps that's part of it.

People here work in all kinds of industries. Some of us are implementing JIT compilers, mission-critical embedded systems, or distributed databases. In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.

  • > People here work in all kinds of industries.

    Yes, it would be nice to have a lot more context (pun intended) when people post how many LoC they introduced.

    B2B SaaS? Then can I assume that a browser is involved and that a big part of that 200k LoC is the verbose styling DSL we all use? On the other hand, Nginx, a production-grade web server, is 250k LoC (251,232 to be exact [1]). These two things are not comparable.

    The point being that, as I'm sure we all agree, LoC is not a helpful metric for comparison without more context, and different projects have vastly different amounts of information/feature density per LoC.

    [1] https://openhub.net/p/nginx

    • I primarily work in C# during the day but have been messing around with simple Android TV dev on occasion at night.

      I’ve been blown away sometimes at what Copilot puts out in the context of C#, but using ChatGPT (paid) to get me started on an Android app - totally different experience.

      Stuff like giving me code that’s using a mix of different APIs and sometimes just totally non-existent methods.

      With Copilot I find it's sometimes brilliant, but it seems so random as to when that will be.

      1 reply →

    •   > when people post how many LoC they introduced.
      

      Pretty ironic you and the GP talk about lines of code.

      From the article:

        Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.
      
        “It’s a silly metric,” he said, because while organizations can use AI to write “infinitely more lines of code” it could be bad code.
      
        “Often times fewer lines of code is way better than more lines of code,” he observed. “So I'm never really sure why that's the exciting metric that people like to brag about.”
      

      I'm with Garman here. There's no clean metric for how productive someone is when writing code. At best, this metric is naive, but usually it is just idiotic.

      Bureaucrats love LoC, commits, and/or Jira tickets because they are easy to measure, but here's the truth: to measure the quality of code you have to be capable of producing said code at (approximately) said quality or better. Data isn't just "data" that you can treat as a black box and throw into algorithms. Data requires interpretation and there's no "one size fits all" solution. Data is nothing without its context. It is always biased, and if you avoid nuance you'll quickly convince yourself of falsehoods.

      Even with expertise it is easy to convince yourself of falsehoods. Without expertise it is hopeless. Just go look at Reddit or any corner of the internet where there are armchair experts confidently talking about things they know nothing about. It is always void of nuance and vastly oversimplified. But humans love simplicity. We need to recognize our own biases.

      5 replies →

  • On the other hand, fault-intolerant codebases are also often highly defined and almost always have rigorous automated tests already, which are two contexts in which coding agents specifically excel.

  • I work on brain-dead CRUD apps much of my time and get nothing from LLMs.

    • I found the opposite - I am able to get a 50% improvement in productivity for day-to-day coding (a mix of backend and frontend), mostly in JavaScript, though it has helped in other languages too. You have to review carefully, though - and have extremely well-written test cases if you have to blindly generate or replace existing code.

  • > In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.

    This is a false premise. LLMs themselves don't force you to introduce breaking changes into your code.

    In fact, the inception of coding agents was lauded as a major improvement to the developer experience because they allow the LLMs themselves to automatically react to feedback from test suites, speeding up implementation while preventing regressions.

    If tweaking your code can result in breaking a million things, this is a problem with your code and how you worked to make it resilient. LLMs are only able to introduce regressions if your automated tests are unable to catch any of those million things breaking. If this is the case then your problems are far greater than LLMs existing, and at best LLMs only point out the elephant in the room.

Perhaps the issue is you were used to writing 200k lines of code. Most engineers would be aghast at that. Lines of code is a debit, not a credit.

  • I am reacting emotionally here, based on zero knowledge of the B2B codebase's environment, but to be honest I think it is relevant to the discussion of why people are "worlds apart".

    200k lines of code is a failure state. At this point you have lost control and can only make changes to the codebase through immense effort, and not at a tolerable pace.

    Agentic code writers are good at giving you this size of mess and at helping to shovel stuff around to make changes that are hard for humans due to the unusable state of the codebase.

    If overgrown, barely manageable codebases are all a person's ever known, and they think it's normal that changes are hard and time-consuming and need reams of code, I understand that they believe AI agents are useful as code writers. I think they do not have the foundation to tell mediocre from good code.

    I am extremely aware of the judgemental hubris of this comment. I'd not normally huff my own farts in public this obnoxiously, but I honestly feel it is useful for the "AI hater vs AI sucker" discussion to be honest about this type of emotion.

    • It really depends on what your use case is. E.g. if you're dealing with a lot of legacy integrations, handling all the edge cases can require a lot of code that you can't refactor away through cleverness.

      Each integration is hopefully only a few thousand lines of code, but if you have 50 integrations you can easily break 100k LoC just dealing with those. They just need to be encapsulated well, so that the integration cruft is isolated from the core business logic and they become relatively simple to reason about.

    • If all your code depends on all your other code, yeah 200k lines might be a lot. But if you actually know how to code, I fail to understand why 200k lines (or any number) of properly encapsulated well-written code would be a problem.

      Further, if you yourself don't understand the code, how can you verify that using LLMs to make major sweeping changes, doesn't mess anything up, given that they are notorious for making random errors?

    • > 200k lines of code is a failure state.

      What on earth are you talking about? This is unavoidable for many use-cases, especially ones that involve interacting with the real world in complex ways. It's hardly a marker of failure (or success, for that matter) on its own.

    • 200k LoC is not a failure state. Suppose your B2B SaaS has 5 user types and 5 downstream SaaSes it connects to; that's 20k LoC per major programming unit. Not so bad.

      1 reply →

    • I agree in principle, and I'm sure many of us know how much of a pain it is to work on million- or even billion-dollar codebases, where even small changes can be weeks of bureaucracy and hours of meetings.

      But with the way the industry is, I'm also not remotely surprised. We have people come and go as they are poached, burned out, or simply hit by life circumstances. The training for the new people isn't the best, and the documentation at all but the large companies is probably a mess. We also don't tend to encourage periods focused on properly addressing tech debt, instead prioritizing delivering features. I don't see how such an environment, over years and decades, wouldn't generate so many redundant, clashing, and quirky interactions. The culture doesn't allow much alternative.

      And of course, I hope even the most devout AI evangelists realize that AI will only multiply this culture. Code that no one may even truly understand, but "it works". I don't know if even Silicon Valley (2014) could have made a parody more shocking than the reality this will yield.

  • In that case, LLMs are full-on debt machines.

    • Ones that can remediate it, though. If I am capable of safely refactoring 1,000 copies of a method, in a codebase that humans don't look at, does it really matter, if the workload functions as designed?

      3 replies →

It's interesting how LLM enthusiasts will point to problems like IDE, context, model etc. but not the one thing that really matters:

Which problem are you trying to solve?

At this point my assumption is they learned that talking about this question will very quickly reveal that "the great things I use LLMs for" are actually personal throwaway pieces, not to be extended above triviality or maintained over longer than a year. Which, I guess, doesn't make for a great sales pitch.

  • It's amazing for making small custom apps and scripts, and they're such high quality (compared to what I would half-ass write and never finish or polish) that they don't end up as "throwaway"; I keep using them all the time. The LLM is saving me time to write these small programs, and the small programs boost my productivity.

    Often, I will solve a problem in a crappy single-file script, then feed it to Claude and ask to turn it into a proper GUI/TUI/CLI, add CI/CD workflows, a README, etc...

    I was very skeptical and reluctant of LLM assisted coding (you can look at my history) until I actually tried it last month. Now I am sold.

  • At work I often need smaller, short-lived scripts to find this or that insight, or to use visualization to render some data, and I find LLMs very useful for that.

    A non coding topic, but recently I had difficulty articulating a summarized state of a complex project, so I spoke 2 min in the microphone and it gave me a pretty good list of accomplishments, todos and open points.

    Some colleagues have found them useful for modernizing dependencies of microservices or for getting a head start on unit test coverage for web apps. All kinds of grunt work that's not really complex but just moves quite a lot of text around.

    I agree it’s not life changing, but a nice help when needed.

  • I use it to do all the things that I couldn't be bothered to do before. Generate documentation, dump and transform data for one off analyses, write comprehensive tests, create reports. I don't use it for writing real production code unless the task is very constrained with good test coverage, and when I do it's usually to fix small but tedious bugs that were never going to get prioritized otherwise.

There is definitely a divide in users - those for whom it works and those for whom it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined or rigid like Qt's, you get completely unmaintainable code as a result.

If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.

  • While I agree that AI assisted coding probably works much better for languages and use cases that have a lot more relevant training data, when I read comments from people who like LLM assisted coding vs. those that don't, I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.

    The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks where large scale context is required). Those who can't seem to get value from AI tools seem (at least to me) less tolerant of AI mistakes, and less willing to iterate with AI agents, and they seem more willing to "throw the baby out with the bathwater", i.e. fixate on some of the failure cases but then not willing to just limit usage to cases where AI does a better job.

    To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than the domain. For me personally, while I get a lot of value in AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.

    • > I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.

      The problem with this perspective is that anyone who works in more niche programming areas knows the vast majority of programming discussions online aren't relevant to them. E.g., I've done macOS/iOS programming most of my career, and I now do work that's an order of magnitude more niche than that, and I commonly see programmers saying things like "you shouldn't use a debugger", which is a statement I can't imagine a macOS or iOS programmer making (don't get me wrong, they're probably out there, I've just never met or encountered one). So you just become used to most programming conversations being irrelevant to your work.

      So of course the majority of AI conversations aren't relevant to your work either, because that's the expectation.

      I think a lot of these conversations are two people with wildly different contexts trying to communicate, which is just pointless. Really we just shouldn't be trying to participate in these conversations (the more niche programmers that is), because there's just not enough shared context to make communication effective.

      We just all happen to fall under this same umbrella of "programming", which gives the illusion of a shared context. It's true there are some things that are relevant across the field (it's all just variables, loops, and conditionals), but many of the other details aren't universal, so it's silly to talk about them without first understanding the full context around the other person's work.

      2 replies →

    • It might boil down to individual thinking styles, which would explain why people tend to talk past each other in these discussions.

    • No one likes to hear it, but it comes down to prompting skill. People who are terrible at communicating and delegating complex tasks will be terrible at prompting.

      It's no secret that a lot of engineers are bad at this part of the job. They prefer to work alone (i.e. without AI) because they lack the ability to clearly and concisely describe problems and solutions.

      1 reply →

  • > If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.

    I agree with the general premise. There is however more to it than "heavily borrowed". The degree to which a code base is organized and structured and curated plays as big of a role as what framework you use.

    If your project is a huge pile of unmaintainable and buggy spaghetti code, then don't expect an LLM to do well. If your codebase is well structured, clear, and follows patterns systematically, then of course a glorified pattern-matching service will do far better at outputting acceptable results.

    There is a reason why one of the most basic vibecoding guidelines is to include a prompt cycle to clean up and refactor code between introducing new features. LLMs fare much better when the project in their context is in line with their training. If you refactor your project to align it with what a LLM is trained to handle, it will do much better when prompted to fill in the gaps. This goes way beyond being "heavily borrowed".

    I don't expect your average developer struggling with LLMs to acknowledge this fact, because then they would need to explain why their work is unintelligible to a system trained on vast volumes of code. Garbage in, garbage out. But who exactly created all the garbage going in?

  • > When the system design is not pre-defined or rigid like

    Why would an LLM be any worse building from language fundamentals (which it knows, in ~every language)? Given how new this paradigm is, the far more obvious and likely explanation seems to be: LLM-powered coding requires somewhat different skills and strategies. The success of each user heavily depends on their learning rate.

  • I suspect it comes down to how novel the code you are writing is and how tolerant of bugs you are.

    People who use it to create a proof of concept of something that is in the LLM training set will have a wildly different experience to somebody writing novel production code.

    Even there the people who rave the most rave about how well it does boilerplate.

  • I think there are still lots of code "artisans" who are completely dogmatic about what code should look like. Once the tunnel vision goes and you realise the code just enables the business, it all of a sudden becomes a velocity godsend.

    • There are very good reasons that code should look a certain way; they come from years of experience and the fact that code is written once but read and modified much more often.

      When the first bugs come up you see that the velocity was no godsend, and you end up hiring one of the many "LLM code fixer" companies that are popping up like mushrooms.

      3 replies →

    • Two years in, and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone; am I doing something wrong?

      Your words predict an explosion of unimaginable magnitude in new code and new businesses. Where is it? Nowhere.

      Edit: And don't start about how you vibed a SaaS service; show income numbers from paying customers (not buyouts).

      14 replies →

    • I'm not a code "artisan", but I do believe companies should be financially responsible when they have security breaches.

    • The issue is not with how code looks. It's with what it does, and how it does it. You don't have to be an "artisan" to notice the issues moi2388 mentioned.

      The actual difference is between people who care about the quality of the end result, and the experience of users of the software, and those who care about "shipping quickly" no matter the state of what they're producing.

      This difference has always existed, but ML tools empower the latter group much more than the former. The inevitable outcome of this will be a stark decline of average software quality, and broad user dissatisfaction. While also making scammers and grifters much more productive, and their scams more lucrative.

      2 replies →

And also ask: "How much money do you spend on LLMs?"

In the long run, that is going to be what drives their quality. At some point the conversation is going to evolve from whether or not AI-assisted coding works to what the price point is to get the quality you need, and whether or not that price matches its value.

I deal with a few code bases at work and the quality differs a lot between projects and frameworks.

We have 1-2 small Python services based on Flask and Pydantic, very structured, with a well-written development and extension guide. The newer Copilot models perform very well with this, and improving the dev guidelines keeps making it better. Very nice.

We also have a central configuration of applications in the infrastructure and what systems they need. A lot of similarly shaped JSON files, now with a well-documented JSON schema (which is nice to have anyway). Again, very high quality. Someone recently joked we should throw these service requests at a model and let it create PRs to review.

But currently I'm working in Vector and its Vector Remap Language... it's enough of a mess that I'm faster working without any Copilot "assistance". I think the main issue is that there is very little VRL code out in the open, and the remaps depend on a lot of unseen context, which one would have to work on giving to the LLM. Had similar experiences with OPA and a few more of these DSLs.

> It's like we live in different worlds.

There is the huge variance in prompt specificity as well as the subtle differences inherent to the models. People often don't give examples when they talk about their experiences with AI so it's hard to get a read on what a good prompt looks like for a given model or even what a good workflow is for getting useful code out of it.

  • Some gave. Some even recorded it, and showed it, because they thought that they were good with it. But they weren't good at all.

    They were slower than coding by hand, if you wanted to keep quality. Some were almost as quick as copy-pasting from the code just above the generated one, but their quality was worse. They even kept some bugs in the code during their reviews.

    So the different world is probably about what an acceptable level of quality means. I know a lot of coders who don't give a shit whether what they're doing makes sense, or what their bad solution will cause in the long run. They ignore everything except the "done" state next to their tasks in Jira. They will never solve complex bugs; they simply don't care enough. At a lot of places, they are the majority. For them, an LLM can be an improvement.

    Claude Code the other day made a test for me which mocked out everything from the live code. Everything was green, everything was good. On paper. A lot of people simply wouldn't care to even review it properly. That thing can generate a few thousand lines of semi-usable code per hour. It's not built to review it properly. Serena MCP, for example, is specifically built not to review what it does. That's stated by its creators.
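
    To illustrate that failure mode with a made-up example (not the actual test it wrote): a test that mocks out the very method it claims to cover stays green no matter what the live code does.

      // Hypothetical over-mocked test, sketched with Node's built-in test runner.
      import { test, mock } from 'node:test';
      import assert from 'node:assert/strict';

      // Stand-in for live code that would normally hit a payment API.
      const billing = {
        async chargeCustomer(id: string, cents: number): Promise<{ ok: boolean }> {
          throw new Error('not implemented: would call the payment API');
        },
      };

      test('charges the customer', async () => {
        // The method under test is replaced, so the assertion only checks the mock.
        mock.method(billing, 'chargeCustomer', async () => ({ ok: true }));

        const result = await billing.chargeCustomer('cust_123', 4200);
        assert.equal(result.ok, true); // always green, proves nothing about the billing logic
      });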

    • Honestly I think LLMs really shine best when you're first getting into a language.

      I just recently got into JavaScript and TypeScript, and being able to ask the LLM how to do something and get some sources and linked examples is really nice.

      However, using it in a language I'm much more familiar with really decreases the usefulness. Even more so when your codebase is mid- to large-sized.

      5 replies →

    • > Some gave. Some even recorded it, and showed it, because they thought that they were good with it. But they weren't good at all.

      Do you have any links saved by any chance?

  • I'm convinced that for coding we will have to use some sort of TDD or enhanced requirement framework to get the best code. Even in human-made systems, the quality is highly dependent on the specificity of the requirements and the engineer's ability to probe the edge cases. Something like writing all the tests first (even in something like Cucumber) and having the LLM write code to get them to pass would likely produce better code, even though most devs hate the test-first paradigm.
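
    As a rough sketch of what I mean (hypothetical applyDiscount function, using Node's built-in test runner): the tests are written first and handed to the LLM as the spec, so its only job is to make them pass.

      // pricing.test.ts -- written by a human before any implementation exists.
      import { test } from 'node:test';
      import assert from 'node:assert/strict';

      // pricing.ts does not exist yet; the LLM is asked to implement
      // applyDiscount so that this file passes unmodified.
      import { applyDiscount } from './pricing.js';

      test('applies a percentage discount', () => {
        assert.equal(applyDiscount(100, 10), 90);
      });

      test('clamps the discounted price at zero', () => {
        assert.equal(applyDiscount(5, 200), 0);
      });

      test('rejects negative discounts', () => {
        assert.throws(() => applyDiscount(100, -5));
      });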

My AI experience has varied wildly depending on the problem I'm working on. For web apps in Python, they're fantastic. For hacking on old engineering calculation code written in C/C++, it's an unmitigated disaster and an active hindrance.

  • Just last week I asked Copilot to make a FastCGI client in C. Five times it gave me code that did not compile. After some massaging I got it to compile; it didn't work. After some changes, it works. Now I say "I do not want to use libfcgi, I just want a simple implementation". After already an hour of wrestling, I realize the whole thing blocks, and I want no blocking calls... still fighting half an hour later, I'm slowly getting there. I see the code: a total mess.

    I deleted it all and wrote from scratch a 350-line file which works.

    • Context engineering > vibe coding.

      Front-load with instructions and examples, and be specific. How well you write the prompt greatly determines the output.

      Also, use Claude Code, not Copilot.

      3 replies →

It’s not just you, I think some engineers benefit a lot from AI and some don’t. It’s probably a combination of factors including: AI skepticism, mental rigidity, how popular the tech stack is, and type of engineering. Some problems are going to be very straightforward.

I also think it’s that people don’t know how to use the tool very well. In my experience I don’t guide it to do any kind of software pattern or ideology. I think that just confuses the tool. I give it very little detail and have it do tasks that are evident from the code base.

Sometimes I ask it to do rather large tasks and occasionally the output is like 80% of the way there and I can fix it up until it’s useful.

  • Yah. Latest thing I wrote was

    * Code using sympy to generate math problems testing different skills for students, with difficulty values affecting what kinds of things are selected, and various transforms to problems possible (e.g. having to solve for z+4 of 4a+b instead of x) to test different subskills

    (On this part, the LLM did pretty well. The code was correct after a couple of quick iterations, and the base classes and end-use interfaces are correct. There's a few things in the middle that are unnecessarily "superstitious" and check for conditions that can't happen, and so I need to work with the LLM to clean it up.)

    * Code to use IRT to estimate the probability that students have each skill and to request problems with appropriate combinations of skills and difficulties for each student.

    (This was somewhat garbage. Good database & backend, but the interface to use it was not nice and it kind of contaminated things).

    * Code to recognize QR codes in the corners of worksheet, find answer boxes, and feed the image to ChatGPT to determine whether the scribble in the box is the answer in the correct form.

    (This was 100%, first time. I adjusted the prompt it chose to better clarify my intent in borderline cases).

    The output was, overall, pretty similar to what I'd get from a junior engineer under my supervision-- a bit wacky in places that aren't quite worth fixing, a little bit of technical debt, a couple of things more clever that I didn't expect myself, etc. But I did all of this in three hours and $12 expended.

    The total time supervising it was probably similar to the amount of time spent supervising the junior engineer... but the LLM turns things around quick enough that I don't need to context switch.

    • I think it's fair to call coding LLMs similar to fairly bad but very fast juniors that don't get bored. That's a serious drawback, but it does give you something to work with. What scares me is non-technical people just vibe coding, because it's like a PM driving the same juniors with no one to give sanity checks.

  • > I also think it’s that people don’t know how to use the tool very well.

    I think this is very important. You have to look at what it suggests critically, and take what makes sense. The original comment was absolutely correct that AI-generated code is way too verbose and disconnected from the realities of the application and large-scale software design, but there can be kernels of good ideas in its output.

  • I think a lot of it is tool familiarity. I can do a lot with Cursor but frankly I find out about "big" new stuff every day like agents.md. If I wasn't paying attention or also able to use Cursor at home then I'd probably learn more inefficiently. Learning how to use rule globs versus project instructions was a big learning moment. As I did more LLM work on our internal tools that was also a big lesson in prompting and compaction.

    Certain parts of HN and Reddit I think are very invested in nay-saying because it threatens their livelihoods or sense of self. A lot of these folks have identities that are very tied up in being craftful coders rather than business problem solvers.

I think it's down to language and domain more than tools.

No model I've tried can write, usefully debug, or even explain CMake. (It invents new syntax if it gets stuck; I often have to prompt multiple AIs to know whether even the first response in the context was made up.)

My luck with embedded C has been atrocious for existing codebases (burning millions of tokens), but passable for small scripts (Arduino projects).

My experience with Python is much better: suggesting relevant libraries and functions, debugging odd errors, or even making small scripts on its own. Even the original GitHub Copilot, which I got access to early, was excellent on Python.

A lot of the people who seem to have fully embraced agentic vibe-coding seem to be in the web or Node.js domain, which I've not done myself since pre-AI.

I've tried most (free or trial) major models or schemes in hope that i find any of them useful, but not found much use yet.

> It's like we live in different worlds.

We probably do, yes. The web domain compared to a cybersecurity firm compared to embedded will have very different experiences, because clearly there's a lot more code to train on for one domain than the other (for obvious reasons). You can have colleagues at the same company or even on the same team with drastically different experiences because they might be in the weeds on a different part of the tech.

> I then carefully review and test.

If most people did this, I would have 90% fewer issues with AI. But as we'd expect, people see shortcuts and use them to cut corners, not to give more time to polishing the edges.

What tech stack do you use?

Betting in advance that it's JavaScript or Python, probably with very mainstream libraries or frameworks.

  • FWIW, Claude Code does a great job for me on complex-domain Rust projects, but I just use it on one relatively small feature/code chunk at a time, where oftentimes it can pick up existing patterns etc. (I try to point it at similar existing code/features if I have them). I do not let it write anything creative where it has to come up with its own design (either high-level architectural or low-level facilities). Basically I draw the lines manually and let it color the space between, using existing reference pictures. Works very, very well for me.

  • Is this meant to detract from their situation? These tech stacks are mainstream because so many use them... it's only natural that AI would be the best at writing code in contexts where it has the most available training data.

    • > These tech stacks are mainstream because so many use them

      That's a tautology. No, those tech stacks are mainstream because it is easy to get something that looks OK up and running quickly. That's it. That's what makes a framework go mainstream: can you download it and get something pretty on the screen quickly? Long-term maintenance and clarity is absolutely not a strong selection force for what goes mainstream, and in fact can be an opposing force, since achieving long-term clarity comes with tradeoffs that hinder the feeling of "going fast and breaking things" within the first hour of hearing about the framework. A framework being popular means it has optimized for inexperienced developers feeling fast early, which is literally a slightly negative signal for its quality.

    • No, it's a clarification. There is massive difference between domains, and the parent post did not specify.

      If the AI can only decently do JS and Python, then it can fully explain the observed disparity in opinion of its usefulness.

  • You are exactly right in my case - JavaScript and Python dealing with the AWS CDK and SDK. Where there is plenty of documentation and code samples.

    Even when it occasionally gets it wrong, it’s just a matter of telling ChatGPT - “verify your code using the official documentation”.

    But honestly, even before LLMs when deciding on which technology, service, or frameworks to use I would always go with the most popular ones because they are the easiest to hire for, easiest to find documentation and answers for and when I myself was looking for a job, easiest to be the perfect match for the most jobs.

It could be the language. Almost 100% of my code is written by AI; I supervise as it creates and steer it in the right direction. I configure the code agents with examples of all the frameworks I'm using. My choice of Rust might be disproportionately providing better results, because cargo, the expected code structure, examples, docs, and error messages are so well thought out in Rust that the coding agents can really get very far. I work on 2-3 projects at once, cycling through them and supervising their work. Most of my work is simulation, physics, and complex robotics frameworks. It works for me.

As a practical example, I've recently tried out v0's new updated systems to scaffold a very simple UI where I can upload screenshots from videogames I took and tag them.

The resulting code included an API call to run arbitrary SQL queries against the DB. Even after pointing this out, this API call was not removed or at least secured with authentication rules, but instead /just/hidden/through/obscure/paths...

I agree, it's like they looked at GPT 3.5 one time and said "this isn't for me"

The big 3 - Opus 4.1, GPT-5 High, Gemini 2.5 Pro

Are astonishing in their capabilities, it's just a matter of providing the right context and instructions.

Basically, "you're holding it wrong"

Do you not think part of it is just whether employers permit it or not? My conglomerate employer took a long time to get started and has only just rolled out agent mode in GH Copilot, but even that is in some reduced/restricted mode vs the public one. At the same time we have access to lots of models via an internal portal.

  • Companies that don't allow their devs to use LLMs will go bankrupt and in the meantime their employees will try to use their private LLM accounts.

I am also constantly astonished.

That said, observing attempts by skeptics to “unsuccessfully” prompt an LLM have been illuminating.

My reaction is usually either:

- I would never have asked that kind of question in the first place.

- The output you claim is useless looks very useful to me.

B2B SaaS in most cases is a sophisticated mask over some structured data, perhaps with great UX, automation, and convenience, so I can see LLMs being more successful there, all the more because there is more training data and many processes are streamlined. Not all domains are equal; go try to develop a serious game with LLMs, not yet another simple and broken arcade game, and you'll have a different take.

I think people react to AI with strong emotions, which can come from many places, anxiety/uncertainty about the future being a common one, strong dislike of change being another (especially amongst autists, who I would guess, based on me and my friend circle, are quite common around here). Maybe it explains a lot of the spicy hot takes you see here and on Lobsters? People are unwilling to think clearly or argue in good faith when they are emotionally charged (see any political discussion). You basically need to ignore any extremist takes entirely, both positive and negative, to get a pulse on what's going on.

If you look, there are people out there approaching this stuff with more objectivity than most (mitsuhiko and simonw come to mind, have a look through their blogs, it's a goldmine of information about LLM-based systems).

It really depends, and can be variable, and this can be frustrating.

Yes, I’ve produced thousands of lines of good code with an LLM.

And also yes, yesterday I wasted over an hour trying to define a single docker service block for my docker-compose setup. Constant hallucination, eventually had to cross check everything and discover it had no idea what it was doing.

I’ve been doing this long enough to be a decent prompt engineer. Continuous vigilance is required, which can sometimes be tiring.

GitHub Copilot, Microsoft Copilot, Gemini, Lovable, GPT, Cursor with Claude models, you name it.

Lines of code is not a useful metric for anything. Especially not productivity.

The less code I write to solve a problem the happier I am.

It could be because your job is boilerplate derivatives of well solved problems. Enjoy the next 1 to 2 years because yours is the job Claude is coming to replace.

Stuff Wordpress templates should have solved 5 years ago.

Honestly, the best way to get good code, at least with TypeScript and JavaScript, is to have like 50 ESLint plugins.

That way it constantly yells at Sonnet 4 to get the code into at least a better state.

If anyone is curious, I have a massive ESLint config for TypeScript that really gets good code out of Sonnet.

But before I started doing this, the code it wrote was so buggy, and it was constantly trying to duplicate functions into separate files, etc.
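
For a flavor of it, here's a heavily trimmed-down sketch of the kind of rules involved (illustrative thresholds only, core ESLint rules; the real config also layers typescript-eslint and other plugins on top):

  // eslint.config.js -- minimal flat-config sketch (ESLint 9+)
  import js from '@eslint/js';

  export default [
    js.configs.recommended,
    {
      rules: {
        // keep generated functions small and flat
        'max-lines-per-function': ['error', { max: 60 }],
        complexity: ['error', 10],
        'max-depth': ['error', 3],
        'max-params': ['error', 4],
        // discourage the duplication agents tend to produce
        'no-duplicate-imports': 'error',
        // general hygiene the agent is forced to respect
        eqeqeq: 'error',
        'no-var': 'error',
        'prefer-const': 'error',
      },
    },
  ];

The agent then has to iterate until the linter is quiet, which is where most of the cleanup happens.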

[flagged]

  • It is quite a putdown to tell someone else that if you wrote their program it would be 10 times shorter.

    That's not in keeping with either the spirit of this site or its rules: https://news.ycombinator.com/newsguidelines.html.

  • [flagged]

    • I understand the provocation, but please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

      Your GP comment was great, and probably the thing to do with a supercilious reply is just not bother responding (easier said than done of course). You can usually trust other users to assess the thread fairly (e.g. https://news.ycombinator.com/newsguidelines.html).

    • > What makes you think I'm not "a developer who strongly values brevity and clarity"

      Some pieces of evidence that make me think that:

      1. The base rate of developers who write massively overly verbose code is about 99%, and there's not a ton of signal to deviate from that base rate other than the fact that you post on HN (probably a mild positive signal).

      2. An LLM writes 80% of your code now, and my prior on LLM code output is that it's on par with a forgetful junior dev who writes very verbose code.

      3. 200K lines of code is a lot. It just is. Again, without more signal, it's hard to deviate from the base rate of what 200K-line codebases look like in the wild. 99.5% of them are spaghettified messes with tons of copy-pasting and redundancy and code-by-numbers scaffolded code (and now, LLM output).

      This is the state of software today. Keep in mind the bad programmers who make verbose spaghettified messes are completely convinced they're code-ninja geniuses; perhaps even more so than those who write clean and elegant code. You're allowed to write me off as an internet rando who doesn't know you, of course. To me, you're not you, you're every programmer who writes a 200k LOC B2B SaaS application and uses an LLM for 80% of their code, and the vast, vast majority of those people are -- well, not people who share my values. Not people who can code cleanly, concisely, and elegantly. You're a unicorn; cool beans.

      Before you used LLMs, how often were you copy/pasting blocks of code (more than 1 line)? How often were you using "scaffolds" to create baseline codefiles that you then modified? How often were you copy/pasting code from Stack Overflow and other sources?

    • At least to me what you said sounded like 200k is just with LLMs but before agents. But it's a very reasonable amount of code for 9 years of work.

  • This is such a bizarre comment. You have no idea what code base they are talking about, their skill level, or anything.

  • > I'm struggling to even describe... 200,000 lines of code is so much.

    The point about increasing levels of abstractions is a really good one, and it's worth considering whether any new code that's added is entirely new functionality, some kind of abstraction over some existing functionality (that might then reduce the need for as new code), or (for good or bad reason) some kind of copy of some of the existing behaviour but re-purposed for a different use case.

  • 200kloc is what, 4 reams of paper, double sided? So, 10% of that famous Margaret Hamilton picture (which is roughly "two spaceships worth of flight code".) I'm not sure the intuition that gives you is good but at least it slots the raw amount in as "big but not crazy big" (the "9 years work" rather than "weekend project" measurement elsethread also helps with that.)