Comment by simonw

8 days ago

I think the borderline is when you take responsibility for the code, and stop blaming the LLM for any mistakes.

That's the level of responsibility I want to see from people using LLMs in a professional context. I want them to take full ownership of the changes they are producing.

Sounds good, but the bar is probably set too high and far too idealistic.

The effects of vibecoding destroy trust inside teams and orgs, between engineers.

  • So did shipping unverified, untested code before agents existed. Bad quality will always erode trust.

    The problem with LLM-based coding is that it can generate code (whether good or bad) much faster than was possible before.

I don't blame the agent for mistakes in my vibe coded personal software, it's always my fault. To me it's like this:

80%+ (share of code written by the LLM): You don't understand the codebase. Correctness is ensured through manual testing and asking the agent to find bugs. You're only concerned with outcomes; the code is sloppy.

50%: You understand the structure of the codebase and you skim changes in your session, but correctness is still ensured mostly through manual testing and asking the agent to review. Code quality is questionable, but you keep it from spinning out of control. Critically, you are hands-on enough to ensure security, data integrity, the stuff that really counts at the end of the day.

20%-: You've designed the structure of the codebase and you write most of the code yourself, probably only copy-pasting code from a chatbot if you're generating code at all. The code is probably well made and maintainable.

  • I feel like there’s one more dimension. For me, 95%+ of the code I ship has been written (i.e. typed out) by an LLM, but the architecture and structure, down to method and variable names, are mine, and completely my responsibility.