Comment by thinkingemote
3 days ago
I was also thinking about this. Running LLMs raw, it's all about predicting the next token.
Just as Ask Jeeves gave way to Google, we can go further and stop using LLMs as chat. That could be more efficient and would also reduce anthropomorphism.
E.g. we can reframe our queries as completions: "list of x is"...
Currently we're stuck in the inefficient, old-fashioned, and unhealthy Ask Jeeves / Clippy mindset. But just as when Google took over search, we can adapt and change quickly.
So not only should a better LLM not present its output as chat; we, the users, also need to approach it differently.
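To make the reframing concrete, here's a minimal sketch of the contrast, assuming Hugging Face's transformers text-generation pipeline with gpt2 as a stand-in base model (neither is from the original comment): a chat-style question versus a completion-style prompt that just gives the model a pattern to continue.

```python
from transformers import pipeline

# Base (non-chat) text-generation model; "gpt2" is just a stand-in,
# any raw completion model would do.
generator = pipeline("text-generation", model="gpt2")

# Chat-style framing: poses a question and invites a conversational answer.
chat_prompt = "Can you give me a list of common HTTP status codes?"

# Completion-style framing: sets up a pattern for the model to continue,
# treating the LLM as a next-token predictor rather than an interlocutor.
completion_prompt = "A list of common HTTP status codes:\n1."

result = generator(completion_prompt, max_new_tokens=60)
print(result[0]["generated_text"])
```

The completion-style prompt leans on what the model actually does (continue text) instead of role-playing a conversation partner, which is the mindset shift the comment is arguing for.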