
Comment by joshwillik

2 hours ago

> The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. […] The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.

Maybe there’s some new paradigm that makes this true. But it doesn’t seem obviously true to me.

Humans write the best code long term when everything orbits a clear vision of the underlying problem space.

LLMs only seem to consider the deeper problem space when I explicitly flag it for them; otherwise they write “good enough for this situation” code. And that stack-of-patches style is exactly how code becomes messy and complicated in the first place.