Comment by willturman
9 days ago
Without a mechanism to detect output from LLMs, we’re essentially facing perpetual model collapse with each new ingestion of information, from academic journals to blogs to art. [1][2]
[1] https://en.m.wikipedia.org/wiki/Model_collapse
[2] https://thebullshitmachines.com/lesson-16-the-first-step-fal...