Comment by lubujackson

2 days ago

I predict we enter a world where these wand-waving prompts are backed by well-structured frameworks that eliminate the need to dig into the code.

Originally I thought LLMs would add a new abstraction layer, like C++ -> PHP, but now I think we will begin replacing swaths of "logically knowable" processes one by one with dynamic and robust interfaces. In other words, LLMs, working under the right restrictions, will add a new layer of libraries.

A library for auth, a library for form inputs, etc. Extensible in every way, with easy translation between languages. And you can always dig into the code of a library, but mostly they just work as-is. LLMs thrive with structure, so I think the real next wave will be adding various structures on top of general LLMs to achieve this.
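To make the "structure on top of a general LLM" idea a little more concrete, here is a minimal sketch of what such a library might look like from the caller's side: an ordinary typed interface, with the fuzzy judgment delegated to a model behind a fixed schema. Everything here is hypothetical — `call_llm` is a stand-in stub, not any real model API, and the address-validation example is just one possible shape such a library could take.

    # Hypothetical sketch: a "library" whose public surface is ordinary typed code,
    # while the fuzzy decision is delegated to an LLM behind a fixed output schema.
    import json
    from dataclasses import dataclass


    @dataclass
    class ValidationResult:
        ok: bool
        message: str


    def call_llm(prompt: str) -> str:
        """Stand-in for a real model call; returns a canned JSON response here."""
        return json.dumps({"ok": False, "message": "Street address appears incomplete."})


    def validate_address(raw_address: str) -> ValidationResult:
        """Public, stable interface: callers never see prompts or model details."""
        prompt = (
            'Return JSON {"ok": bool, "message": str} judging whether this '
            f"postal address looks complete: {raw_address!r}"
        )
        data = json.loads(call_llm(prompt))
        return ValidationResult(ok=bool(data["ok"]), message=str(data["message"]))


    if __name__ == "__main__":
        print(validate_address("123 Main"))

The point of the sketch is the boundary: the consumer imports a function with a normal signature and never touches prompts, which is roughly what "well-structured frameworks that eliminate the need to dig into the code" would mean in practice.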

This is possible. But when I read something like this, I just wonder: why would this be more efficient than doing it with the thing we already call a "library" - that is, a normal library or component written in some programming language - and just using AI to create and perfect those libraries more quickly?

I'm not even sure I disagree with your comment... I agree that LLMs will "add a new layer of libraries"... but it seems fairly likely that they'll do that by generating a bunch of computer code?