Comment by 59nadir

4 hours ago

Honestly, I think it's great that you could get the thing you wanted done.

Consider this, though: your anecdote has nothing to do with software engineering (or an engineering mindset). No measurements were made, no technical trade-offs were considered (you readily admit you lack the knowledge to do that), and you're not expecting to maintain it or, seemingly, to develop it much further.

The above situation has never actually been hard; the thing you made is trivial to someone who knows the basics of a small set of things. LLMs (not just Claude Code) have made this doable for someone who knows none of those things, and that's very cool.

But none of this really means anything for solutions to more complex problems that require more knowledge, or solutions that don't really exist yet, or things people pay for, or things expected to be worked on continuously over time, perhaps by multiple people.

When people decry vibecoding as moronic, the subtext is (or should be) that they're not talking to you; they're talking to people who deliver things others are expected to pay for or rely on as part of their workflow, and who act like their output/product is good when it's clearly a mess in terms of UX.

• I get what you're saying, but imagine a CTO/CIO who's never been very technical. The world is full of them. They vibe up an app and think it's easy. They don't have the developer experience to know what they're missing.

  While I downplayed my job experience, I'm very in touch with developers, their workflows, and the challenges they face. And I'm scared, because they won't be the ones making these decisions about LLM usage; their bosses, the guys who vibe-coded a dumb app over the weekend, will.

  • Maybe I misunderstood the purpose of your post...? It seemed to me like you were arguing "Hey, what about me? Why shouldn't I vibecode since it enables me to do things that I couldn't before?" and that's what I wrote my comment addressing.

    I completely agree that people are going to be forced into using things that basically don't work for anything non-trivial without massive handholding, and they will be forced to use them by people who are out of touch and are mostly setting up to eventually get rid of as many people as they possibly can.

    • I (like many on HN, I'm sure) have been continually pestered by management to use AI like it's some cure for polio. They just want to tell their VP that "my team is accelerating its use of AI!" so that the VP can pass that up the food chain. Same as when we started migrating (unnecessarily, imho) to the cloud. Just another checkbox and an attaboy from senior management.

      There's really not much of a place for AI in my work. We're not cutting edge; we're just a large, safe business protected by a regulatory moat. We don't want to be on the cutting edge, since the bleeding is bad for profits and reputation. But the incentives our IT execs operate under are all about resume/credential building and moving on to bigger things. Our C-level officers are not even slightly technical, so they defer to the CIO. Nothing new at all in this company; it's a story told a thousand times.

      So I was just very curious how it would be to approach vibe coding as if I were my VP. You don't know what you don't know, right? And the ease of creating a simple app that would be beyond 99% of the people in my company gives way too much confidence. And with misplaced confidence comes poor decision-making.

      I can see where someone who is currently an Excel jockey would benefit from some of this stuff, as long as they can compare and test the outputs. But the danger of false confidence has to be an institutional risk that's being ignored.