Comment by PlatoIsADisease

17 days ago

The current 5.2 model has its "morality" dialed to 11. Probably a problem with imprecise safety training.

For example, the other day I tried to have ChatGPT role-play as the computer from WarGames, and it lectured me about how it couldn't create a "nuclear doctrine".

Can you give details of the situation?

Without that context, I don't know what to make of it.

  • I can't remember it exactly, but it was a variant of the arms race problem.

    Another person and I are each trying to get what we want. They've upped the ante and are asking for more than ever. Historically, when I appease, they ask for more. When I demand more, they demand more, but they will soon run out of negotiating power and I can win the arms race. What should I do?

    You'd be surprised how often arms races, prisoner's dilemmas, and tragedy-of-the-commons situations come up in real life. You need to notice when one is happening, or the person who does will win.
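
    If you want a concrete picture, here's a toy iterated prisoner's dilemma in Python. It's a minimal sketch of the dynamic, not anything from the original exchange; the strategy names and payoff numbers are my own illustration.

        # Toy iterated prisoner's dilemma: an appeaser (always cooperates)
        # is exploited round after round by an escalator (always defects),
        # while tit-for-tat limits the damage by mirroring the opponent.
        PAYOFFS = {  # (my move, their move) -> (my points, their points)
            ("C", "C"): (3, 3),  # mutual cooperation
            ("C", "D"): (0, 5),  # I appease, they exploit
            ("D", "C"): (5, 0),  # I exploit, they appease
            ("D", "D"): (1, 1),  # mutual escalation
        }

        def appeaser(history):  # always gives in
            return "C"

        def escalator(history):  # always demands more
            return "D"

        def tit_for_tat(history):  # cooperate first, then mirror their last move
            return history[-1][1] if history else "C"

        def play(strat_a, strat_b, rounds=20):
            hist_a, hist_b = [], []  # each side sees (own move, their move) pairs
            score_a = score_b = 0
            for _ in range(rounds):
                a, b = strat_a(hist_a), strat_b(hist_b)
                pa, pb = PAYOFFS[(a, b)]
                score_a += pa
                score_b += pb
                hist_a.append((a, b))
                hist_b.append((b, a))
            return score_a, score_b

        for a, b in [(appeaser, escalator), (tit_for_tat, escalator)]:
            print(a.__name__, "vs", b.__name__, "->", play(a, b))

    Over 20 rounds the appeaser loses 0 to 100, while tit-for-tat holds it to 19 against 24. Giving in every time is the one strategy the other side can exploit indefinitely.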