Comment by hattmall
12 hours ago
That seems really paradoxical, and I think it would just burn up compute. The AI doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.
If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time.
The difference IMO is that every single human is a slightly different model, not the same model with a different prompt or different weights.
I'm not sure I buy that competition between individuals is a hard requirement, but let's assume that to be the case for now. Then how many variants of itself do you suppose an AI could instantiate in parallel given full control of a gigawatt-class datacenter?
Humans ultimately judge their output by comparison and competition. When we get to the point where an AI is capable of participating in the market directly, it will no longer make sense to proxy judgement through humans.
Agreed. But also, comparison and competition between individuals is only one of the ways in which improvement happens. Consider, for example, that it's also possible to build something for personal consumption and iteratively improve on the design without regard for what anyone else thinks of it. Cooking comes to mind.