Comment by K0balt
1 year ago
In general, LLMs seem to function more reliably when you use pleasant language and good manners with them. I assume this is because the same bias also shows up in the training data.