Comment by EastLondonCoder
5 hours ago
I think the article makes some great points; however, this part is not even wrong:
"The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves."
LLMs are not conscious, which means that left to their own devices they will drift. I think the single most important issue when working with LLMs is that they write text without a layer that is aware of what's actually being written. That state can be present in humans as well, for example in sleepwalking.
Everyone who's tried to vibe code a somewhat larger project to completion knows that you only get to a certain level of complexity before the model stops being able to reason about the code effectively. It starts to guess why something is not working and cannot get out of that state until guided by a human.
That is not a new state of affairs in the field. I believe all programmers have at some point in their careers come across code written by developers rushing to get past a hard deadline, with the result being a codebase that cannot effectively be modified.
I think a certain subset of programming projects could possibly be vibe coded, in the sense that the code can be merged without human understanding. But those have to be very straightforward CRUD apps. In almost everything else you will get stopped by slop.
I suspect that the future of our profession will shift from writing code to reading code and applying continuous judgement on architecture, working together with LLMs. It's also worth keeping in mind that you cannot assign responsibility to an LLM, and most human organizations require that to work.