Comment by koolba

6 months ago

I feel oddly prescient today: https://news.ycombinator.com/item?id=44217676

I really think you're wrong.

The processes we use to annotate content and synthetic data will turn AI outputs into a gradient that makes future outputs better, not worse.

It might not be as obvious with LLM outputs, but it should be super obvious with image and video models. As we select the best visual outputs of these systems, the slight errors introduced, combined with taste-based curation, will steer the systems toward better performance and more generality.

It's no different from genetics and biology adapting to every ecological niche, if you think of the genome as a synthetic machine and physics as a stochastic gradient. We're speed-running the same thing here.
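The selection-as-gradient intuition above can be sketched as a toy simulation. Everything here is illustrative (the function name, population size, and noise level are made up, not anything from the thread): candidates have a scalar "quality," a curator keeps the best ones each generation, and small random errors are reintroduced when the population is regenerated.

```python
import random

def curation_step(population, keep=10, noise=0.1, rng=None):
    """One generation of 'taste-based curation': keep the top-scoring
    candidates, then refill the population by copying survivors with
    small random perturbations (the 'slight errors introduced')."""
    rng = rng or random.Random()
    survivors = sorted(population, reverse=True)[:keep]
    offspring_per_survivor = len(population) // keep
    return [s + rng.gauss(0, noise)
            for s in survivors
            for _ in range(offspring_per_survivor)]

rng = random.Random(0)
pop = [rng.gauss(0, 1) for _ in range(100)]   # initial candidate qualities
start_mean = sum(pop) / len(pop)
for _ in range(50):
    pop = curation_step(pop, rng=rng)
end_mean = sum(pop) / len(pop)
# Selection plus noise behaves like noisy gradient ascent on whatever
# the curator rewards, so mean quality climbs across generations.
```

Of course this only optimizes for the curator's taste; whether that proxy tracks real quality is exactly the point under debate.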

Nicely done! I think I've heard this framing before: the idea of treating content as free from AI "contamination." That notion has been out there in the ether for a while.

But I think the suitability of low background steel as an analogy is something you can comfortably claim as a successful called shot.