
Comment by dheera

4 months ago

I absolutely hate the polarization around "vibe coding".

The whole point of AI agents is to eventually get good enough to do this stuff better than humans do. It's okay to dogfood and test them now and see how well they do, and improve them over time.

Software engineers will eventually become managers of AI agents. Vibe coding is just version 0.1 pre-alpha of that future.

"The whole point of AI agents is to eventually get good enough to do this stuff better than humans do"

You can be an enthusiastic adopter of AI tooling (like I am) without wanting them to eventually be better than humans at everything.

I'm very much still in the "augment, don't replace" camp when it comes to AI tooling.

  • No, I actually do want them to eventually be better than humans at everything.

    I'm okay with directing AI to write exactly the software I need, without having to manage "stakeholders", deal with the egos of boomers up the management chain, and all the shitty things certain people do in any organizations that get large enough.

    • Why do you think any of those are going away? You can be the best programmer in the world and you'll still be dealing with that.

      If you're in a situation where you don't have to deal with that, it won't be the result of using AI.

> Software engineers will eventually become managers of AI agents.

Source? This seems very optimistic on both ends (that AI will replace SE work, AND SEs will still be employed to manage them).

> The whole point of AI agents is to eventually get good enough to do this stuff better than humans do. It's okay to dogfood and test them now and see how well they do, and improve them over time.

I agree with that. The problem I have is that people are getting sucked into the hype and evaluating the results of those tests with major rose-colored glasses. They gloss over all the issues and fool themselves into thinking that the overall result is favorable.

I think the issue most of us have is that vibe-coding is not being treated as a dogfooding experiment, but as a legitimate way to deliver production code.

I am already seeing "vibe coding experts" and other attempts to legitimize the practice as a professional approach to software development.

The issue is clear: if you accept all PRs with no reviews, as vibe-coding suggests, you will end up with security and functionality flaws.

Even if you do review, if you are the type of developer who thinks you can vibe-code a serious project, I doubt you are interested in regular security reviews.

Lastly, businesses are long-lasting projects, and the state of AI is constantly evolving. Your codebase's stability and speed of development determine your business's success, and if only the AI model that built it understands your code, you are on shaky ground when the models evolve. Model updates are not always forward evolutions; there will be iterations that fail to fix or implement code in a codebase the model previously wrote. Human engineers may not even be able or willing to save you after three years of vibe coding your product.

  • > but as a legitimate way to deliver production code.

    Beyond the developer side of the hype, which gets talked about a lot, I'm witnessing a trend on the "company side" that LLM coding is a worthy thing to shell out $$$ for; IOW, there is an expected return on investment for that $$$/seat; IOW, it is expected to increase productivity by at least twice that much $$$.

    Companies already have a hard time throwing away the prototype - god forbid you showcase a flashy PoC - and prioritizing quality tasks (which may need to run over a quarter) over product items (always P0), and in that ROI context I don't see the LLM-assisted trend helping with software quality at all.

Anyone who has maintained code written by engineers new to the industry, who didn't understand the context of a system or the underlying principles of what they were writing, may disagree.