Comment by speedylight
18 hours ago
I only have thoughts on your fourth question. The way I understand LLMs, they rely on their training data both as their source of information and as a model for how to formulate responses. In the same way that being nice to a person online tends to get you better answers, it's plausible that an LLM would produce more useful output when you're polite than when you talk to it like an asshole.
This assumes that somewhere in the model's weights there's a strong correlation between politeness and high-quality information.