Comment by re_chief
5 hours ago
I wouldn't describe the LLM's actions in the example as "solving a problem" so much as "following a well-established routine". If I were to, for instance, make a PB&J sandwich, I wouldn't say that what I'm doing is "real cooking" even if it might technically fit the definition.
If an LLM controlling a pair of robot hands was able to make a passable PB&J sandwich on my behalf, I _guess_ that could be useful to me (how much time am I really saving? is it worth the cost? etc.), but that's very different from those same robo-hands filling the role of a chef de cuisine at a fine dining restaurant, or even a cook at a diner.
In this analogy you're clearly a private chef with clients who have very specific wishes and allergies.
The rest of us are just pumping out CRUD-burgers off the API assembly line. Not exactly groundbreaking stuff.
LLMs are really good at burgers, but not so good at being a private chef.
Every useful CRUD app becomes its own special snowflake with time and users.
Now, if your CRUD app never gets any users, sure, it stays generic. But we've had low-code solutions that solve that problem for decades.
LLMs are good at stuff that probably should have been low code in the first place but couldn't be for various reasons. That's useful, but it comes with a ton of trade-offs. And these kinds of solutions cover a lot less ground than you'd think.