Comment by EGreg
2 years ago
Well, LLMs are also “command-based”. The commands are called prompts. Left to themselves they would just continue the text, but they were specifically trained via RLHF to be command-following.
Actually, we have been able to build autonomous agents and agentic behavior without LLMs for decades, and we can program them with declarative instructions far more precisely than with natural language.
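To illustrate the kind of declarative programming the comment alludes to, here is a minimal sketch of a rule-based agent (all names and the scenario are hypothetical, not from the comment): behavior is declared as condition-action rules, and a generic interpreter picks the action, rather than scripting a step-by-step sequence of commands.

```python
# Declarative agent sketch: we declare *what* should trigger each action;
# a generic interpreter decides *how* to act in any given state.

def make_rules():
    # Rules are ordered by priority: first matching rule wins.
    return [
        (lambda s: s["battery"] < 20, "recharge"),
        (lambda s: s["dirt"] > 0,     "clean"),
        (lambda s: True,              "idle"),   # default fallback rule
    ]

def decide(rules, state):
    """Return the action of the first rule whose condition holds in `state`."""
    for condition, action in rules:
        if condition(state):
            return action

rules = make_rules()
print(decide(rules, {"battery": 10, "dirt": 5}))  # battery rule fires first
print(decide(rules, {"battery": 90, "dirt": 5}))
print(decide(rules, {"battery": 90, "dirt": 0}))
```

The point is that the rules state goals and priorities precisely, while the interpreter is reusable across agents; nothing here requires an LLM.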
The thing LLMs seem to do is give non-experts many of the tools to get basic things done that, until now, only experts could do. This works because the LLM models the domain space, having read what experts have said so far, which lets a non-expert kind of handwave and still produce results.
(I added a bit to the comment above, sorry)
I think there's a clear difference between a command and a declaration. Prompts are declarative.