
Comment by etothet

3 hours ago

Vibe Coding (and LLMs) did not create undisciplined engineering organizations or engineers. They exposed and accelerated them.

Plenty of engineers have loose (or no!) standards and practices for how they write code. Similarly, plenty of engineering teams have weak and loose standards for how code gets pushed to production. This concept isn't new; it's just become a lot easier for individuals and teams who have never really adhered to any sort of standards in their SDLC to produce a lot more code and flesh out ideas.

Bad engineers continue being bad, good engineers continue being good.

I personally don’t know any colleagues who were good engineers just because they wrote code faster. The best engineers I know were ones who drew on experience and careful consideration and shared critical insights with their team that steered the direction of the system positively.

> Claude, engineer a system for me, but do it good. Thanks!

  • > I personally don’t know any colleagues who were good engineers just because they wrote code faster

    Same. If anything, the opposite seems to be true: the ones I'd call "good engineers" were slower, less panicked when production was down, and could reason their way (slowly) through pretty much anything thrown at them.

    Opposite experience here: I've sat next to developers who were trying their fastest to restore production and making more mistakes that made it even worse, and next to developers who rushed through the first implementation idea they had for a feature, failing to consider so many things.

    • > Same, if anything, the opposite seems to be true, the ones that I'd call "good engineers" were slower

      Unfortunately, a lot of workplaces are ignoring this, believing their engineers are assembly line workers, and the ones who complete 10 widgets per minute are simply better than the ones who complete 5 widgets per minute.

  • > I personally don’t know any colleagues who were good engineers just because they wrote code faster.

    However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out. It's precisely that speed that enables a process like "let's try X, hmm, how about Y, no... ok, Z is nice; ok team, here are the tradeoffs...". Then they remember their experience with X, Y, and Z, and use it to shape their thinking going forward.

    Meanwhile, other engineers have gotten X to finally mostly work and are invested in shipping it because they just want to be done. In my experience, this is how a lot of coding agents seem to act.

    It's not obvious to me how to apply the expert loop to agentic coding. Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier...

    • Or: depth-first search of the solution space vs breadth-first (or balanced) search of the solution space.

      > Of course you can ask your agent to try several different things and pick the best, or ask it to recommend architectural improvements that would make a given change easier

      The ideal solution increasingly seems to be encoding everything that differentiates a good engineer from a bad engineer into your prompt.

      But at that point the LLM isn’t really the model as much as the medium. And I have some doubts that LLMs are the ideal medium for encoding expertise.
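The depth-first vs breadth-first framing above can be made concrete with a toy sketch. The tree of design choices and all the names here are invented for illustration; the point is only the shape of the two strategies: depth-first commits to the first promising branch ("ship the first idea"), while breadth-first surveys every alternative at each level before descending.

```python
from collections import deque

# Hypothetical solution space: each node is a design choice,
# its children are refinements of that choice.
tree = {
    "start": ["X", "Y", "Z"],
    "X": ["X-monolith", "X-services"],
    "Y": ["Y-queue"],
    "Z": ["Z-cache", "Z-cdn"],
}

def depth_first(node):
    """Follow the first child all the way down, never comparing siblings."""
    path = [node]
    while tree.get(node):
        node = tree[node][0]
        path.append(node)
    return path

def breadth_first(root):
    """Visit every alternative level by level before committing to any."""
    seen, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        seen.append(node)
        queue.extend(tree.get(node, []))
    return seen

print(depth_first("start"))    # one path, siblings Y and Z never examined
print(breadth_first("start"))  # every option at each level surveyed
```

The "balanced" variant would be something in between, e.g. a beam search that descends while keeping a few of the best candidates alive at each level.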

    • > However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out

      The Pragmatic Programmer book has whole chapters about this. Ultimately, you either solve the problem analytically up front (whiteboard, deep thinking on a sofa), or you get fast at trying stuff out AND keeping the good bits.

Yeah, a lot of people came of age with a "we'll fix it when it's a problem" mindset. Previously their codebases would start to resist feature development, you'd fix the immediate bottlenecks, and then you could kick the can down the road a bit until you hit the next point of resistance. You kinda refactor as you do features.

The frontier models have pushed the "it's a problem" moment further back. They can kinda work with whatever pile of code you give them... to a point. So it manifests as the LLM introducing extra regressions, or dropping more requirements than it used to, but it's not really manifesting as the job being harder for you. It's just not as smooth as it was from an empty repository.

Then you hit the point where it just breaks too much and you need to fix it. And the whole codebase is just fractal layers of decisions that you didn't make. That's hard to untangle. And you're not editing the code yourself, so you don't have that visceral "adding this specific thing in this specific way has a lot of tension" reaction that allows you to have those refactoring breakthroughs.

Vibe-coded apps with barely any tests, invariants, etc. No wonder it turns into spaghetti. You can always refactor code and force agents to write small, modular pieces and files. Good engineering is good engineering whether an agent or a human wrote the code. Take time to force agents to refactor and explore choices. Humans must still understand and drive the architecture, at least at this point. Agents can help, do recon amazingly well, and provide suggestions.