Comment by ethbr1
2 years ago
That's not entirely accurate.
LLMs encode some level of understanding of their training set.
Whether that's sufficient for a specific purpose, or sufficiently comprehensive to generate side effects, is an open question.
* Caveat: with regard to introspection, this also assumes the model isn't specifically guarded against it and isn't opaquely lying.