Comment by brookst

6 hours ago

Do we know why it works for humans?

Models are trained on human outputs. It’s not super surprising to me that inputs following encouraging patterns produce better outputs; much of the training material reflects that pattern.

If I had to wager a lazy, armchair guess: I think it forces the model to think harder/longer.

The answer is probably more straightforward than we think, e.g. “the user thinks I can do this, so I’d better make sure I didn’t miss anything.”