
Comment by diarrhea

10 hours ago

Interesting, though I disagree on basically all points...

> No Silver Bullet

As an industry, we do not know how to measure productivity. AI coding, as currently practiced, also does not increase reliability. The same goes for simplicity; if anything, the opposite: we are adding obscene complexity in the name of shipping features (which is not the same thing as productivity).

In some areas I can see how AI doubles "productivity" (whatever that means!), but I do not see a 10x on the horizon.

> Kernighan's Law

Still holds! AI is amazing at debugging, but the vast majority of existing code is still human-written, so the AI has an easy time of it: it really can be "twice as smart" as those human authors (in practice, more like twice as persistent, patient, knowledgeable, and good at tool use).

Debugging fully AI-generated code with the same AI, however, falls into exactly the trap the law describes.

(As an aside, I do wonder how things will go once we move from "use AI to understand human-generated content" to "use AI to understand AI-generated content"; it will probably work worse.)

> just ask AI to rewrite the code

This is a terrible idea, unless perhaps there is an existing, exhaustive test harness. I'm sure people will go for this option, but I am convinced it will usually be the wrong approach (as it is today).
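A minimal sketch of what such a harness looks like, in Python. The `slugify` function and its edge cases are invented for illustration; the point is that observable behavior gets pinned down *before* any rewrite, so an AI-generated replacement is only accepted if every pinned case still passes.

```python
def slugify(title: str) -> str:
    """Original (human-written) implementation whose behavior we want to preserve."""
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

# Characterization cases captured from the CURRENT implementation,
# including edge cases, before asking an AI for a rewrite.
CHARACTERIZATION_CASES = {
    "Hello, World!": "hello-world",
    "  leading and trailing  ": "leading-and-trailing",
    "already-slugged": "already-slugged",
    "": "",
}

def passes_harness(impl) -> bool:
    """A rewrite is acceptable only if it matches every pinned case."""
    return all(impl(src) == want for src, want in CHARACTERIZATION_CASES.items())

assert passes_harness(slugify)  # the original must pass its own harness
```

Without a harness like this, "just rewrite it" silently redefines the program's behavior; with one, the rewrite at least has to reproduce everything you thought to pin.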

> Dijkstra on the foolishness of programming in natural language

So why are we not seeing repos of pure natural language? Just raw prompt Markdown files, generating computer code on the fly, in whatever programming language we desire? For the sake of argument, assume LLMs could regenerate everything instantly at will.

For two reasons. First, the prompts would need to rise to a level of precision indistinguishable from a formal specification. Second, complexity really does become "exponentially harder": the inaccuracies inherent to natural language would compound. We still need to persist results in formal languages; code remains the ultimate arbiter. We are now just (much) better at generating large amounts of it.
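The ambiguity point can be made concrete with a toy example (the function names and prompt are invented): a single natural-language instruction, "remove duplicates from the list", admits several distinct formal meanings, and only code pins down which one you get.

```python
def dedupe_keep_first(xs):
    """One reading: drop later duplicates, preserve original order."""
    seen = set()
    return [x for x in xs if not (x in seen or seen.add(x))]

def dedupe_sorted(xs):
    """Another reading: unique elements, order not preserved."""
    return sorted(set(xs))

data = [3, 1, 3, 2, 1]
assert dedupe_keep_first(data) == [3, 1, 2]
assert dedupe_sorted(data) == [1, 2, 3]
# Both implementations satisfy the prompt; the formal code is the arbiter.
```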

> Lehman’s Law

This reminds me of a recent article [0]. Let AI run loose without a genuine effort to curtail complexity and (with current tools and models) the project will need to be thrown out before long. It is a self-defeating strategy.

I think of this as the Peter principle applied to AI: it will happily keep generating more and more output until it is "promoted" past its competence, at which point the LLM plus tooling can no longer make sense of its own prior outputs. Advancements such as longer context windows just inflate both sides of the ledger (more capacity to understand, but also more output to be understood).

The question is, will the market care? If software today goes wrong in 3% of cases, and with widespread AI use it goes wrong in, say, 7%, will people care? Or will we just keep chugging along, happy with all the new, more featureful, but more faulty software? After all, we know about the Peter principle in organizations, yet it persists and we carry on regardless.

> Jevons Paradox

My understanding is the exact opposite. We might well see a further proliferation of information technology into the remaining sectors where it has not yet been economically viable.

0: https://lalitm.com/post/building-syntaqlite-ai/

> The question is, will the market care? If software today goes wrong in 3% of cases, and with widespread AI use it'll be, say, 7%, will people care? Or will we just keep chugging along, happy with all the new, more featureful, but more faulty software?

This is THE question. I honestly think the majority will gladly take an imperfect app over waiting for a perfect one, or having no app at all. Some devs might be able to stand out with a polished app built the traditional way, but that takes much longer to achieve, and by then the market may have shifted, which is a risk.