Comment by mehulashah
5 days ago
The AI scaling that went on for the last five years is going to be very different from the scaling that will happen in the next ten years. These models have latent capabilities that we are racing to unearth. IMO is but one example.
There’s so much to do at inference time. This result could not have been achieved without the substrate of general models. It’s not like Go or protein folding. You need the collective public global knowledge of society to build on. And yes, there’s enough left for ten years of exploration.
More importantly, the stakes are high. There may be zero-day attacks, biological weapons, and more that could be discovered. The race is on.
Latent??
If you looked at RLHF hiring over the last year, there was a huge push to hire IMO competitors for RLHF work. This was a new, highly targeted, highly funded RLHF effort.
Can you provide any kind of source? Very curious about this!
https://work.mercor.com/jobs/list_AAABljpKHPMmFMXrg2VM0qz4
https://benture.io/job/international-math-olympiad-participa...
https://job-boards.greenhouse.io/xai/jobs/4538773007
And Outlier/Scale, which was bought by Meta (via Scale), listed many math AI trainer jobs on LinkedIn that required IMO experience. I can't find those historical postings now, though.
I'm just one cog in the machine and this is anecdotal, but there was a huge upswing in IMO (or similar) RLHF job postings over the past six months to a year.
Yup, we have bootstrapped enough intelligence into the models that we can introduce higher levels of AI.