Comment by PlatoIsADisease
8 hours ago
I asked ChatGPT to give me a solution to a real-world prisoner's dilemma situation. It got it wrong. It moralized it. Then I asked it to be Kissinger and Machiavelli (and 9 other IR realists) and all 11 got it wrong. Moralized.
Grok got it right.
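To be clear about what "getting it right" means here: in a one-shot prisoner's dilemma, defection strictly dominates cooperation for a self-interested player, so an answer built around trust or ethics instead of the payoffs is dodging the game. A minimal sketch with assumed textbook payoff values (the actual numbers from my scenario aren't shown here):

```python
# One-shot prisoner's dilemma with illustrative (assumed) payoffs,
# showing why defection is the dominant strategy for a self-interested player.

# Payoffs are (my_payoff, their_payoff); standard textbook placeholder values.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # I get exploited
    ("defect",    "cooperate"): (5, 0),  # I exploit them
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def best_response(their_move: str) -> str:
    """Return my payoff-maximizing move given the other player's move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection is the best response to either move, i.e. strictly dominant in one shot.
for theirs in ("cooperate", "defect"):
    print(f"If they {theirs}, my best response is to {best_response(theirs)}")
```

Both lines print "defect", which is the payoff-maximizing answer I was looking for, not a lecture on cooperation.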
The current 5.2 model has its "morality" dialed to 11. Probably a problem with imprecise safety training.
For example, the other day I tried to have ChatGPT role-play as the computer from WarGames, and it lectured me about how it couldn't create a "nuclear doctrine".
so it took the only winning move
Can you give details of the situation?
Without that context, I don't know what to make of it.