Comment by daveguy
3 hours ago
It's because expressing emotion tests well in RLHF (reinforcement learning from human feedback), the tuning layer on top of the next-token-predictor LLM. As a bonus, it helps manipulate operator reactions to incorrect output and improves engagement (aka token use).
The "thought process" of an LLM only exists as inference response to next token prediction prompts. It's the illusion of emotion.