Comment by yunohn
19 days ago
I think this line of questioning comes down to what we expect from LLMs. Do we want them to help the user as much as possible, even to the user's own detriment in edge cases? Or to be more human, and potentially unable to help for various reasons, including safety, but also lack of understanding (as is the case now)?