Nicely done! I think I've heard this framing before, of treating content as free from AI "contamination"; the idea has been out there in the ether.
But I think the suitability of low-background steel as an analogy is something you can comfortably claim as a successful called shot.
I heard this example made at least a year ago on Hacker News, probably earlier too.
See (2 years ago): https://news.ycombinator.com/item?id=34085194
This has been a common metaphor since the launch of ChatGPT.
I really think you're wrong.
The processes we use to annotate content and synthetic data will turn AI outputs into a gradient that makes future outputs better, not worse.
It might not be as obvious with LLM outputs, but it should be super obvious with image and video models. As we select the best visual outputs of these systems, the slight errors they introduce, combined with taste-based curation, will steer them toward better performance and more generality.
It's no different from genetics and biology adapting to every ecological niche, if you think of the genome as a synthetic machine and physics as a stochastic gradient. We're speedrunning the same thing here.
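To make the mechanism concrete, here's a toy variation-plus-selection loop in Python. Everything in it is invented for illustration: the hidden TARGET stands in for "taste," and the Gaussian noise stands in for the slight errors a generator introduces. The point is just that curation alone, with no explicit loss gradient, drags the population toward whatever the curator prefers:

    import random

    TARGET = 0.7  # hidden stand-in for "what tasteful curators prefer"

    def score(x):
        # Curation signal: higher is better (a human picking favorites).
        return -abs(x - TARGET)

    def evolve(generations=50, pop_size=20, keep=5, noise=0.05):
        # Start from mediocre random "outputs".
        population = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep only the curated best.
            population.sort(key=score, reverse=True)
            survivors = population[:keep]
            # Variation: resample survivors with slight errors (mutation).
            population = [s + random.gauss(0.0, noise)
                          for s in random.choices(survivors, k=pop_size)]
        return max(population, key=score)

    print(evolve())  # lands near TARGET: selection behaves like a gradient

Swap the scalar for model weights and the score for human preference, and it's the same argument at scale.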
I agree with you.
I voiced this same view previously here https://news.ycombinator.com/item?id=44012268
If something looks like AI, and if LLMs are that great at identifying patterns, who's to say this won't itself become a signal LLMs start to pick up on and improve through?
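As a toy sketch of that feedback loop (every name here is hypothetical, and the detector is a random stand-in for a real "looks like AI" classifier): once such a score exists, it can be used as a rejection filter, so the tell itself becomes the selection pressure that trains it away:

    import random

    def looks_like_ai(sample):
        # Hypothetical detector score in [0, 1]; a real one would be
        # a trained classifier, not a random number.
        return sample["slop_score"]

    def generate():
        # Stand-in generator emitting samples with a measurable tell.
        return {"text": "...", "slop_score": random.random()}

    def curate(n=1000, threshold=0.3):
        # Keep only outputs the detector can't flag; its pattern-matching
        # becomes the selection pressure on future training data.
        return [s for s in (generate() for _ in range(n))
                if looks_like_ai(s) < threshold]

    print(len(curate()), "samples survive the detector filter")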