Comment by muzani
6 hours ago
I find the inverse as well - asking an LLM to be chatty ends up with much higher output. I've experimented with a few AI personalities, and telling it to be careful etc. matters less than telling it to be talkative.