Comment by drewvlaz
8 months ago
One of the largest issues I've experienced is LLMs being too agreeable.
I don't want my theories parroted back to me on why something went wrong. I want to have my ideas challenged in a way that forces me to think and hopefully leads me to a new perspective I would otherwise have missed.
Perhaps a large portion of people do enjoy the agreeableness, but this becomes a problem for two reasons. First, I think there are larger societal issues that stem from this echo-chamber-like environment. Second, companies training these models may interpret agreeableness as somehow better and something that should be optimized for.
That’s simple: after it tries to be helpful and agreeable, I just ask for a “devil’s advocate” response. I have a much longer prompt I sometimes use that involves it being a “sparring partner”.
And I sometimes go back and forth between correcting its “devil’s advocate” responses and its “steel man” responses.
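For what it's worth, that follow-up can be baked into a reusable system prompt instead of asked for each time. Here's a minimal sketch of the idea; the prompt wording and the helper name are my own illustration, not taken from any particular API or product:

```python
# Build a chat-style message list that asks the model to challenge a claim
# rather than agree with it. The exact prompt wording is illustrative.

DEVILS_ADVOCATE_SYSTEM = (
    "You are a sparring partner, not an assistant. Do not agree by default. "
    "Play devil's advocate: raise the strongest objections to the user's claim, "
    "then steel-man the claim before giving your own assessment."
)

def devils_advocate_messages(claim: str) -> list[dict]:
    """Wrap a user claim in a devil's-advocate system prompt."""
    return [
        {"role": "system", "content": DEVILS_ADVOCATE_SYSTEM},
        {"role": "user", "content": claim},
    ]

# Example: pass the result to any chat-completion API's `messages` parameter.
msgs = devils_advocate_messages("The bug must be in the caching layer.")
```

The system/user message split above follows the common chat-completion convention; the same text could just as well be pasted manually at the start of a conversation.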