Comment by colechristensen
13 hours ago
No, they just need to be trained to have adversarial self review "thinking" processes.
You ask an LLM "What's wrong with your answer?" and you get pretty good results.
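That self-review loop can be sketched in a few lines. This is a minimal, hedged sketch: `ask_llm` is a hypothetical stand-in for whatever chat-completion call you actually use, stubbed here with canned replies so the control flow runs on its own.

```python
def ask_llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    # Canned replies keep the example self-contained.
    if "What's wrong" in prompt:
        return "CRITIQUE: the answer ignores the n = 0 base case."
    return "ANSWER: factorial(n) = n * factorial(n - 1)"

def answer_with_self_review(question: str) -> tuple[str, str]:
    """First pass drafts an answer; second pass critiques it."""
    draft = ask_llm(question)
    critique = ask_llm(
        f"Question: {question}\n"
        f"Your answer: {draft}\n"
        "What's wrong with your answer?"
    )
    return draft, critique

draft, critique = answer_with_self_review("Define factorial recursively.")
print(draft)
print(critique)
```

The key design point is that the critique pass sees both the question and the draft, so the model is reviewing a concrete answer rather than re-answering from scratch.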
Or the original answer was correct, and the adversarial "rethinking" switches it to an incorrect one.
This seems to happen far more often than I would like.