Comment by wordofx

7 days ago

[flagged]

Absurd take. Speed is not the issue! Optimising for speed of production is what got us into the utter quagmire that is modern software.

Lack of correctness, lack of understanding and ability to reason about behaviour, and poor design that builds up from commercial pressure to move quickly are the problems we need to be solving. We’re accelerating the rate at which we add levels to a building with utterly rotten foundations.

God damn it, I’m growing to loathe this industry.

  • > Lack of correctness, lack of understanding and ability to reason about behaviour, and poor design that builds up from commercial pressure to move quickly are the problems we need to be solving.

    So good developers will become amazing with the assistance of AI, while the rest will become unemployed and find work elsewhere. So we are healing the industry, because without AI the industry is a hell of a lot worse. You only have to look at the replies on HN to see how laughable the industry is.

Why is this line of thinking so common with AI folk? Is it just inconceivable to you that other people have different experiences with a technology that has only become widespread in the past couple of years and that is, by its very nature, non-deterministic?

  • For what it's worth, I basically accept the premise of the GP comment, in the same way that I would accept a statement that "loggers who don't use a chainsaw will be put out of business". Sure, fine, whatever.

    I still think the tone is silly and polarizing, particularly when it's replying to a comment where I am very clearly not arguing against use of the tools.

    • It assumes the result though. These comments presuppose that LLMs are universally good, useful, and positive when that is the very argument being debated, and then use the presupposition to belittle the other side of the debate.

      2 replies →

As if going faster is the only goal of a programmer.

A simulation I worked on for 2 months was 400 lines of code in total. Typing it out was never the bottleneck. I need to understand the code so that, while studying it over the next 1.5 months, I can figure out whether a problem is a bug in my code or a flaw in the underlying model.

Or they work with languages, libraries, systems or problem areas where the LLMs fail to perform anywhere near as well as they do for you and me.

  • About libraries or systems unknown to the AI: you can fine-tune an LLM or use retrieval (RAG), e.g. via an MCP server like Context7, to give it the specialised knowledge it needs and make it a more capable companion on topics it was trained poorly (or not at all) on. Your own written specs and similar material also help.

    • You need a good amount of example code to train it on. I find LLMs moderately useful for web dev, but fairly useless for embedded development. They'll pick up some project-specific code patterns, but they clearly have no concept of what it means to enable a pull-up on a GPIO pin.
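      For context, here's roughly what "enable a pull-up" amounts to, as a minimal sketch assuming a MicroPython-style board (the pin number is a placeholder, not from any real project):

      ```python
      # Configure a GPIO as an input with the internal pull-up enabled, so the
      # line idles high until something (e.g. a button to ground) pulls it low.
      # Pin 4 is a placeholder; the correct pin depends on the board.
      from machine import Pin

      button = Pin(4, Pin.IN, Pin.PULL_UP)

      if button.value() == 0:  # reads 0 while the button shorts the pin to ground
          print("button pressed")
      ```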

  • Still haven’t seen an example. It’s always the same. People don’t want to give hints or context. The moment you start doing things properly it’s “oh no, this is just a bad example. It still can’t do what you do.”

    • My experience is the opposite. I've yet to see a single example of AI working well for non-trivial work that I consider relevant, based on 15+ years of experience in this field. It's good for brainstorming, writing tests, and greenfield work / prototyping. Add business context more complicated than can be explained in a short sentence, or any nuance or novelty, and it becomes garbage pretty much instantly.

      Show me an AI agent adding a meaningful new feature or fixing a complicated bug in an existing codebase that serves the needs of a decent sized business. Or proposing and implementing a rearchitecture that simplifies such a codebase while maintaining existing behavior. Show me it doing a good job of that, without a prompt from an experienced engineer telling it how to write the code.

      These types of tasks are what devs spend their days actually doing, as far as coding is concerned (never mind the non coding work, which is usually the harder part of the job). Current AI agents simply can't do these things in real world scenarios without very heavy hand holding from someone who thoroughly understands the work being done, and is basically using AI as an incredibly fast typing secretary + doc lookup tool.

      With that level of hand holding, it does probably speed me up by anywhere from 10% to 50% depending on the task - although in hindsight it also slows me down sometimes. Net hours saved is anywhere from 0 to 10 per week depending on the week, erring more on the lower end of that distribution.

The thing is, the AI tools are so easy to use and can be picked up in a day or two by an experienced programmer without any productivity loss.

I don't get why people push this LLM FOMO. The tools are evolving so fast anyway.

  • I'm an experienced engineer who is AI skeptical overall. I've continued to try these tools as they evolved. Sometimes they're neat, oftentimes they fail spectacularly and sometimes they fail in very pernicious ways.

    If it keeps getting better, I'll just start using it more. It's not hard to use, so the FOMO "you have to be using this RIGHT NOW" stuff is just ridiculous.

I've not yet been in a position where reading + cleaning up the LLM's bad code was faster and/or produced better code than if I wrote it by hand. I've tried. Every time someone comes up and says "yeah, of course you're not using GPT4.7-turbo-plus-pro" I go and give the newfangled thing a spin. Nope, hasn't happened yet.

I admit my line of work may not be exactly generic CRUD work, but then again, if it's not useful for anything just one step above implementing a user login for a website or something, then is it really gonna take over the world and put me out of a job in 6 months?

  • Same for me. My last try was with Claude Code on a fairly new and simple Angular 19 side project. It spewed garbage code using the old Angular style (without signals), and it failed to reuse the code that was already there, so the result needed refactoring. The features I asked for were simple, so I clearly lost time prompting + reading + refactoring the result. So I spent the credits and never used it again.

    • So your inability to prompt, hint, provide context, and set up guard rails for the AI to do what you want is the fault of the AI? Sorry to say, you don’t know what you’re doing. This isn’t the fault of AI. This is your inability to learn.

      3 replies →

It is absolutely hilarious to read the responses from people who can’t use AI attempting to justify their ability to code better than AI. These are the people who will be replaced. They are fighting so hard against it instead of learning how to use it.

“I wrote 400 lines of code I don’t understand and need months to understand it, because AI obviously can’t understand it or break it down and help me document it”

“Speed is what caused problems! Because I don’t know how to structure code and get AI to structure it the same way, it’s obviously going rogue and doing random things I cannot control, so it’s wrong and causing a mess!!!”

“I haven’t been able to use it properly and don’t know how to rein it in to do specific tasks, so it produces a lot of stuff that takes me ages to read! I could have written it faster!!!”

I would love to see what these people are doing 1-2 years from now, and whether it eventually clicks for them or they are unemployed, complaining that AI took their jobs.

  • Honestly, the one through line that I've seen with regards to the success of AI in programming is that it'll work very well for trivial, mass-produced bullshit and anyone who was already doing that for work will feel like it can do their job (and it probably can) almost entirely.

    I don't really doubt that AI can put together your Nth Rails backend that does nothing of note pretty solidly, but I know it can't even write a basic, functioning tokenizer + parser in a very simple, imperative language (Odin) for a Clojure-like language. It couldn't even (when given the source for a tokenizer) write the parser that uses the tokenizer either.

    These are very basic things that I would expect juniors with some basic guidance to accomplish. But even using Cursor + Claude Sonnet 3.5 (this was 2-3 months ago, and I had seen recommendations for exactly the combination of tools I was using, so I don't really buy the argument that the choice of tools was the problem), it fell apart and even started re-adding functions it had already added. At some point I seeded it with properly written parser functions to give it examples of what it needed to accomplish, but it kept failing almost completely even with access to literally all the code it needed.
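    To give a sense of the scale of the task it kept failing at, here's a throwaway Python sketch of a tokenizer for a Clojure-like syntax (the real exercise was in Odin, so treat this purely as an illustration of the problem size):

    ```python
    import re

    # Throwaway tokenizer for a Clojure-like syntax: delimiters, strings,
    # numbers, and bare symbols. Comments and reader macros are ignored.
    TOKEN_RE = re.compile(r"""
          (?P<open>[(\[{])
        | (?P<close>[)\]}])
        | (?P<string>"(?:\\.|[^"\\])*")
        | (?P<number>-?\d+(?:\.\d+)?)
        | (?P<symbol>[^\s()\[\]{}"]+)
    """, re.VERBOSE)

    def tokenize(src):
        """Return a list of (kind, text) pairs for the source string."""
        return [(m.lastgroup, m.group()) for m in TOKEN_RE.finditer(src)]

    print(tokenize("(defn add [a b] (+ a b))"))
    # [('open', '('), ('symbol', 'defn'), ('symbol', 'add'), ('open', '['), ('symbol', 'a'), ...]
    ```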

    I can't even imagine how badly it'd fail to handle the actual complicated parts of my work where you have to think across 3 different context boundaries (simulation -> platform/graphics API -> shader) in order to do things.

    • > I don't really doubt that AI can put together your Nth Rails backend that does nothing of note pretty solidly

      Ha. Funny you should say that... recently I've been using AI to green-field a new Rails project, and my experience with it has been incredibly mixed, to say the least.

      The best agents can, more or less, crank out working code after a few iterations, but it's brittle, and riddled with bad decisions. This week I had to go through multiple prompt iterations trying to keep Claude 3.7 from putting tons of redundant logic in a completely unnecessary handler block for ActiveRecord::RecordNotFound exceptions -- literally 80% of the action logic was in the exception handler, for an exception that isn't really exceptional. It was like working with someone who just learned about exceptions, and was hell-bent on using them for everything. If I wasn't paying attention the code may have worked, I suppose, but it would have fallen apart quickly into an incomprehensible mess.
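      The shape of it, translated out of Rails into a made-up plain-Python sketch (none of these names come from the actual project):

      ```python
      # Made-up illustration of the anti-pattern: a lookup miss that is an
      # expected outcome gets routed through an exception handler, and most
      # of the action's logic ends up living inside that handler.
      class RecordNotFound(Exception):
          pass

      USERS = {1: "alice"}

      def find_user(user_id):
          try:
              return USERS[user_id]
          except KeyError:
              raise RecordNotFound(user_id)

      # The shape the agent kept producing: "not found" is the main code path.
      def show_user_generated(user_id):
          try:
              user = find_user(user_id)
              return {"status": 200, "user": user}
          except RecordNotFound:
              # ...imagine dozens of lines of logging/fallback/redirect logic here...
              return {"status": 404, "error": "no such user"}

      # The boring version: treat a missing record as an ordinary result.
      def show_user(user_id):
          user = USERS.get(user_id)
          if user is None:
              return {"status": 404, "error": "no such user"}
          return {"status": 200, "user": user}

      print(show_user_generated(2), show_user(2))
      ```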

      The places where the AI really shines are in boilerplate situations -- it's great for writing an initial test suite, or for just cranking out a half-working feature. It's also useful for rubber ducking, and more than occasionally breaks me out of debugging dead ends, or system misconfiguration issues. That's valuable.

      In my more cynical moments, I start to wonder if the people who are most eager to push these things are 0-3 years out of coding bootcamps, and completely overwhelmed by the boilerplate of 10+ years of bad front-end coding practices. For these folks, I can easily see how a coding robot might be a lifeline, and it's probably closer to the sweet spot for the current AI SOTA, where literally everything you could ever want to do has been done and documented somewhere on the web.

      2 replies →

  • Why wouldn't AI eventually take the jobs of people who "know how to use it" as well? If AI makes engineers more productive, then you need fewer of them.

    I utilize AI as part of my workflows, but I'm pretty sure I'll be replaced anyway in 5-10 years. I think software development is a career dead end now, unless you're doing things much closer to hardware than the average dev.