Comment by Schlagbohrer

14 hours ago

What do you think about Cory Doctorow's theory that AI-produced code is going to come back to bite companies due to tech debt and unmaintainability?

I am skeptical of Doctorow's theory because it looks like LLMs will continue to improve enough over the near term to be able to handle issues caused by AI-written code from the past few years.

I've heard OpenClaw had over 600k lines of code vibe-coded in 80 days.

I have this theory that the bloat will expand to the full extent possible. OpenClaw has this; the OpenEye or whatever that comes along later, built with better models, will have 3 million lines of code. None of the possibilities you mention will come to fruition the way you'd like, because speed is preferred over building better things, and to hell with maintainability.

Eventually these things will become a pile of black boxes, and the only option will be to rewrite them from scratch with another next-gen LLM. Lots of costly busywork, and it will all take time.

  • Tech debt and maintainability were important because time was of the essence in another era. If the cycles get compressed by, say, 95%, to hell with it: just trash the old code and write everything from scratch, starting from a clean slate each time?

    • That may be good enough for consumer facing systems. Rewrites seldom go well for enterprise systems of record because the code embodies a lot of undocumented but critical requirements. If you start vibe coding from a clean slate then all of that knowledge is lost and you've created an even bigger problem.

  • Claude Code is similar. It's fairly clean by AI coding standards, but it's also most likely much, much bigger than it should be for what it does.

In the mature service I worked on, adding new code was “templatized”: you had to add feature flags, logs, etc., which didn’t vary much no matter which feature it was. The business logic was also not that complex; I can see AI tools one-shotting that, and it indeed is a productivity boost. You would be surprised to know that most work was exactly this, writing rather mundane code. The majority of the time was spent coordinating with “stakeholders” (actually more like gatekeepers) and testing code (our testing infrastructure was laborious). This was at MSFT. There are a lot of teams innovating at the frontier (mine wasn’t, at least not technically); I don’t know how AI tools work in those situations.
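To make the “templatized” pattern above concrete, here is a minimal sketch of what such boilerplate-heavy feature code often looks like: a flag check, some logging, and a small piece of business logic. All names here are illustrative assumptions, not the commenter's actual service.

```python
# Hypothetical sketch of a "templatized" feature: every new feature follows
# the same shape -- flag check, logging, then a small bit of business logic.
import logging

logger = logging.getLogger("service")

# In a real service this would come from a feature-flag system, not a dict.
FEATURE_FLAGS = {"new_discount_rule": True}

def is_enabled(flag: str) -> bool:
    return FEATURE_FLAGS.get(flag, False)

def apply_discount(order_total: float) -> float:
    """Example feature: a 10% discount, gated behind a flag."""
    if not is_enabled("new_discount_rule"):
        logger.info("new_discount_rule disabled; total unchanged")
        return order_total
    logger.info("new_discount_rule enabled; applying 10%% discount")
    return round(order_total * 0.9, 2)

print(apply_discount(100.0))  # 90.0 while the flag is on
```

The point is that the flag check and logging are identical from feature to feature; only the few lines of business logic change, which is exactly the kind of code an AI tool can one-shot.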

the near term is not an issue, because most AI code is still reviewed by experienced engineers. the problem comes in the future, when junior engineers who never acquired enough experience to handle engineering problems are the ones doing the reviewing