Comment by ACCount37
5 days ago
In the purely mechanical sense: LLMs have less self-awareness than humans, but not zero.
It's amazing how much of it they have, really, given that base models aren't encouraged to develop it at all. And yet post-training doesn't create an LLM's personality from nothing; it reuses what's already there. Even things like metaknowledge, flawed and limited as it is in LLMs, have to trace their origins back to the base model somehow.