Comment by andy12_
22 days ago
No, because in the process they are describing, the AIs would only post things that actually fixed their problem (i.e., the code compiles and passes tests), so the contents posted to that "AI StackOverflow" would be grounded in external reality in some way. It wouldn't be the unchecked recursive loop that characterizes model collapse.
Model collapse could still happen here if a malicious actor were tasked with posting made-up information or junk, though.
As pointed out elsewhere, compiling and passing tests is no guarantee that generated code is correct.
So even “non-Chinese-trained models” will get it wrong.
It doesn't matter that the grounding isn't always correct; some external signal is good enough to avoid model collapse in practice. Otherwise, training coding agents with RL wouldn't work at all.
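The grounding idea above can be sketched as a simple filter: only candidate solutions that pass an external check survive into the shared corpus. This is a toy illustration, not anyone's actual pipeline; the candidate functions and the single test below are hypothetical.

```python
# Minimal sketch of "external grounding": keep only candidate
# solutions that actually pass a test before posting/training on them.

def passes_tests(candidate) -> bool:
    """Return True only if the candidate satisfies the (toy) test suite."""
    try:
        return candidate(2, 3) == 5  # hypothetical spec: add two numbers
    except Exception:
        return False

# Hypothetical model outputs: one correct, one buggy.
candidates = [
    lambda a, b: a + b,  # correct
    lambda a, b: a - b,  # buggy: would pollute the corpus if kept
]

grounded = [c for c in candidates if passes_tests(c)]
print(len(grounded))  # only the solution that passed the check survives
```

The filter is imperfect (a wrong solution can still pass a weak test, as the sibling comment notes), but it biases the corpus toward reality rather than toward pure model output, which is the point being argued.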
And how do you verify that external grounding?
What precisely do you mean by external grounding? Do you mean the laws of physics still apply?