Comment by bsenftner
10 hours ago
I think the author of the post envisions more code-authoring automation — more generated code, tests, and deployments, exponentially more — to the degree that what we have now would seem "quaint," as he says.
Your point that most software uses the same browsers, databases, tooling, and internal libraries is a weakness — a sameness that current AI can exploit to push that automation capability much further. Hell, why even bother making the generated code and infrastructure "human readable" anymore? (Of course, there are all kinds of reasons that's bad, but just watch that "innovation" get a marketing push and take off. Which would only mean we'd need viewing software to render whatever was generated readable — as if anyone would read to understand hundreds or millions of lines of generated complexity anyway.)
LLMs produce human-readable output because they learn from human-readable input. That's a feature: it lets them be much less precise than, say, byte code, where that imprecision wouldn't fly at all.