Comment by echelon
6 months ago
I really think you're wrong.
The processes we use to annotate content and synthetic data will turn AI outputs into a gradient that makes future outputs better, not worse.
It might not be as obvious with LLM outputs, but it should be super obvious with image and video models. As we select the best visual outputs of these systems, the slight variations they introduce, combined with taste-based curation, will steer them toward better performance and more generality.
It's no different from genetics and biology adapting to every ecological niche, if you think of the genome as a synthetic machine and physics as a stochastic gradient. We're speed-running the same thing here.
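A toy sketch of what I mean (plain Python, nothing like a real training pipeline, and the quality() score is just a made-up stand-in for human taste): a generator that only ever retrains on its own curated outputs still drifts toward better outputs, because the selection step itself is the gradient.

```python
# Toy illustration, not a real training loop: selection on purely synthetic
# outputs acts as a gradient that moves the generator toward "better" outputs.
import random

random.seed(0)

def quality(x):
    # Hypothetical stand-in for human taste: outputs near 10 are "good".
    return -(x - 10.0) ** 2

mean, spread = 0.0, 1.0  # the generator's current "style"

for generation in range(30):
    # The model produces only synthetic samples, with slight random variation.
    samples = [random.gauss(mean, spread) for _ in range(200)]
    # Taste-based curation: keep the best 10% of the synthetic outputs.
    kept = sorted(samples, key=quality, reverse=True)[:20]
    # "Retrain" (here: refit) on nothing but the curated synthetic data.
    mean = sum(kept) / len(kept)

print(f"final mean ~ {mean:.2f}, quality ~ {quality(mean):.2f}")
# The mean drifts toward the curated optimum even though no non-synthetic data
# was ever used: selection pressure alone steers the system, like evolution.
```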
I agree with you.
I voiced this same view previously here https://news.ycombinator.com/item?id=44012268
If something "looks like AI", and if LLMs are that great at identifying patterns, who's to say this won't itself become a signal LLMs start to pick up on and improve through?