Comment by slopinthebag

8 days ago

It's so difficult to quantify productivity over an entire field, especially when it's so vast. Chris Lattner recently concluded this about LLM tooling [0]:

> AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.

This matches my experience: there is a lot of code that we probably shouldn't need to write and rewrite anymore, but we still do, because this field has largely failed to derive complete, reusable solutions to trivial problems. There is a massive coordination problem that has fragmented software across the stack, and LLMs offer one way of mitigating it by generating some of the glue and the otherwise trivial but expensive, unproductive interop code.

But the thing about productivity is that it's not one thing, and it cannot be reduced to an anecdote about a side project, or a story about how a single company is introducing (or mandating) AI tooling, or any other single data point. Being able to generate a bunch of code of varying quality and reliability is undeniably useful, but there are simply too many factors involved to make sweeping claims about an entire industry based on a tool that is essentially autocomplete on crack. So it's not surprising that recent studies have failed to validate the current hype cycle.

[0] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...