Comment by dogscatstrees
6 hours ago
> As they did so, they also learned how to improve the prompts they gave AlphaEvolve. One key takeaway: The model seemed to benefit from encouragement. It worked better “when we were prompting with some positive reinforcement to the LLM,” Gómez-Serrano said. “Like saying ‘You can do this’ — this seemed to help. This is interesting. We don’t know why.”
Four of the top logical minds in the world are acknowledging this. It is mind-blowing, and we don't know why.
I know why.
Several people had problems with Sonnet burning through all their credits grinding on a problem it can't solve. Opus fixes this — it has a confidence threshold below which it exits the task instead of grinding.
"I spent ~$100 last week testing both against multiplication. Sonnet at 37-digit × 37-digit (~10³⁷) never quits — 15+ minutes, 211KB of output, still actively decomposing numbers when I stopped it. Opus will genuinely attempt up to ~50 digits (112K tokens on a real try), starts doubting around 55 digits, and by 80-digit × 80-digit surrenders in 330 tokens / 9 seconds with an empty answer." -- Opus, helping me with the data
The "I don't think this is worth attempting" heuristic is the difference. Sonnet doesn't have it, or has it set much higher. In order to get Opus and some other models to work on harder problems that it assumes it is not worth attempting, it requires an increase of confidence level.
I'll finish writing this up this week. I'm making flashy data-visualization animations to make the point right now.
So we have a bunch of impostor-syndrome techies who would 5x with just a hint of encouragement, and now we're trying to 2x them with LLMs; but to get there, those same techies will have to gas up and inspire the LLM with the very leadership and vision they themselves are starved of?
The Universe seems against free lunches… if AGI is possible, finding a manager good enough to get the AGI to update its timesheet will not be (in practice).
It makes sense to me.
Originally, LLMs would get stuck in infinite loops, generating tokens forever. This is bad, so we trained them to strongly prefer stopping once they reach the end of their answer.
However, training models to stop also gave them "laziness": they might prefer a short answer over a meandering one that actually answers the user's question.
Mathematics is unusual because it has an external source of truth (the proof assistant), and also because it requires long meandering thinking that explores many dead ends. This is in tension with what models have been trained to do. So giving them some encouragement keeps them in the right state to actually attempt to solve the problem.
It was just yesterday that this top post [] was decrying the "peril of laziness lost", that LLMs inherently lack the virtue of laziness.
So which one are they?
[] https://news.ycombinator.com/item?id=47743628
I think laziness is not a minimum of effort. Laziness can actually be more effort towards a simpler or more practical solution, because those solutions are more pleasant in some way, and therefore more attractive to pursue.
Do we know why it works for humans?
Models are trained on human outputs. It's not super surprising to me that inputs following encouraging patterns produce better outputs; much of the training material reflects that.
> Do we know why it works for humans?
Try to figure it out. You can do it.
If I had to wager a lazy, armchair guess: the encouragement forces the model to think harder/longer.
The answer is probably more straightforward than we think, e.g. “the user thinks I can do this so I better make sure I didn’t miss anything”
This seems pretty obvious, no?
It's pattern matching on training material. There is almost certainly an overlap between positivity and success in the training material. Positive prompts weight the pattern matching toward positive, and therefore more successful, material.
The training or system prompts have shoved the probabilities toward a space that tends to select "halt" sooner. You need to drag the probability weights around until the model is less likely to reach "halt" so early.
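One concrete knob that does exactly this is biasing the logit of the end-of-sequence token before sampling. A toy sketch in pure Python (the vocabulary and logit values are made up for illustration; real inference APIs expose similar per-token logit-bias controls):

    import math

    def softmax(logits):
        m = max(logits.values())
        exps = {t: math.exp(v - m) for t, v in logits.items()}
        z = sum(exps.values())
        return {t: e / z for t, e in exps.items()}

    # Toy next-token logits at some decoding step; "<eos>" is "halt".
    logits = {"<eos>": 2.0, "therefore": 1.5, "wait,": 1.0}

    print(softmax(logits)["<eos>"])   # ~0.51: model likely stops here

    logits["<eos>"] += -3.0           # drag the "halt" weight down
    print(softmax(logits)["<eos>"])   # ~0.05: mass shifts to continuing

Encouraging words presumably act far more diffusely than a clean per-token bias, but the end effect being described is the same: the distribution gets dragged away from early halting.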
Nice language often sorta does this for whatever model(s) they looked at, and is also something people are likely to try. Probably lots and lots of nonsense token combos would work even better, but who’s gonna try sticking “gerontocratic green giant giraffes” on the end of their prompts to see if it helps?
Positive or negative language likely also avoids pulling the probabilities away from the correct topic, since it's so generic. The nonsense suggestion above might only be ultra-effective when the topic is catalytic converters, for some reason, and push the model into generating tokens about giraffes otherwise. How would you ever discover the dozens or thousands of more-effective but only-sometimes-effective nonsense token combos? You'd need automation and a lot of brute force (a naive version of that search is sketched below), or some better way to analyze the model's weights.
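For what it's worth, people have automated this kind of suffix search against open models (the "adversarial suffix" line of work). A naive brute-force version is just a loop like this, where score() is a hypothetical eval harness you'd have to supply:

    import itertools, random

    WORDS = ["gerontocratic", "green", "giant", "giraffes",
             "you", "can", "do", "this"]

    def score(prompt: str) -> float:
        # Hypothetical eval harness: run the model on a benchmark with
        # this prompt and return accuracy. Stubbed with noise here.
        return random.random()

    best_suffix, best_score = "", float("-inf")
    for combo in itertools.product(WORDS, repeat=3):  # tiny search space
        suffix = " ".join(combo)
        s = score("Solve the problem. " + suffix)
        if s > best_score:
            best_suffix, best_score = suffix, s

    print(best_suffix, best_score)

Even this toy space is 8^3 = 512 full benchmark runs, which is why the published attacks lean on gradient guidance rather than raw brute force.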