Comment by null_deref

9 hours ago

> Does the use of AI always imply slop and vibe coding? I’m really not sure

No, it doesn't. For example, you could use an AI agent just to aid you in code search and understanding, or to fill out well-specified functions which you then do QA on.

  • To do quality QA/code review, one of course needs to understand the design decisions/motivations/intentions (why those exact lines were added, and why they are correct). That makes it essentially the same job as writing those lines yourself, building the understanding (which is the quality) along the way.

    For the terminology, I consider "vibe coding" to be Claude and similar coding agents sculpting entire blocks of code from prompts. My tactic for LLM/AI coding is just to get the signature or an example of some function I need (because the documentation usually sucks), and then code it myself. That way the control/understanding stays (very egoistically) in my hands/head rather than in the LLM. I don't know what kind of projects you do, but many times the magic of LLMs ends, and the discussion just starts going around the same incorrect circle when reflected against reality. At that point I need to return to classic human intelligence.

    And for COBOL + AI: in my experience, mentioning "COBOL" usually means there is a DB plus a UI/APP/API/BATCHJOB layer for interacting with it. The DB schema + semantics is probably the most critical thing to understand here, because it completely defines the operations/bizlogic/interpretations built on it. So any "AI" would also need to understand your DB fully (semantically) in order not to make mistakes.

    But in any case, someone needs to be responsible for the committed code, because only blame and guilt attached to an actual person can eventually avert/minimize sloppiness.

  • You 100% can use it this way. But it takes a lot of discipline to keep the slop out of the code base. The same way it took discipline to keep human slop out.

    There has always been a class of devs who throw things at the wall and see what sticks. They copy-paste from other parts of the application, or from Stack Overflow. They write half-assed tests or no tests at all, and they try their best to push it through the review process with pleas about how urgent it is (there are developers at the opposite end of this spectrum who are also bad).

    The new problem is that this class of developer is exactly the kind that AI speeds up the most, and they are also the most experienced at getting shit code through review.

    • > But it takes a lot of discipline to keep the slop out of the code base.

      It is largely a question of work ethic, rather than a matter of discipline per se.

Because the question almost always comes with an undertone of “Can this replace me?”. If it’s just code search and debugging, the answer is no, because a non-developer won’t have the skills or experience to put it all together.

  • That undertone is overt in the statements of CEOs and managers who salivate at “reducing headcount.”

    The people who should fear AI the most right now are the offshore shops. They’re the most replaceable, because the only reason they exist is the desire to carve off low-skill work and do it cheaply.

    But all of this is overblown anyway, because I don’t see the appetite for new software being satiated anytime soon, even if we made everyone 2x as productive.