
Comment by SketchySeaBeast

3 days ago

I get that feeling too - the underlying tech has plateaued, but now they're brute-force trading extra time and compute for better results. I don't know if that scales anything better than, at best, linearly. Are we going to end up with 10,000 AI monkeys on 10,000 AI typewriters and a team of a dozen monkeys deciding which one's work they like the most?

> the underlying tech has plateaued, but now they're brute-force trading extra time and compute for better results

You could say the exact same thing about the original GPT. Brute forcing has gotten us pretty far.

  • How much farther can it take us? Apparently they've started scaling out rather than up. When does the compute become cost-prohibitive?

    • Until recently, training-time compute was the dominant cost, so we're really just getting started down the test-time scaling road.
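      The "10,000 monkeys plus a panel of judges" pattern upthread is essentially best-of-N sampling: spend test-time compute drawing many candidates, then keep whichever one a scorer prefers. A minimal toy sketch of that idea (all names here - `generate`, `score`, `best_of_n` - are hypothetical stand-ins, not any real model API):

      ```python
      import random

      def generate(prompt: str, rng: random.Random) -> str:
          # Stand-in for an expensive model call: returns a noisy "answer".
          return f"{prompt} -> answer#{rng.randint(0, 9)}"

      def score(candidate: str) -> int:
          # Stand-in for a verifier/reranker; here, a higher trailing digit
          # counts as a "better" answer.
          return int(candidate[-1])

      def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
          rng = random.Random(seed)
          candidates = [generate(prompt, rng) for _ in range(n)]
          # Spending more test-time compute means raising n; quality grows
          # like the max of n noisy draws, which flattens out quickly
          # rather than improving linearly in n.
          return max(candidates, key=score)

      print(best_of_n("2+2", n=16))
      ```

      The comment's scaling worry shows up directly in `max(candidates, key=score)`: doubling n doubles cost, but the expected best-of-n score improves by less and less.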