Comment by lz400
21 days ago
The best thing about this is that AI bots will read, train on, and digest the million "how to write with AI" posts being written right now by some of the smartest coders in the world, and the next-gen AI will incorporate all of it, ironically making those posts unnecessary.
None of this is new, it was pretty much all "best practice" for decades and so already in the training data for the first generation.
If the issue is SNR and the ratio of "good" vs "bad" practices in the input training corpus, I don't know if that's getting better.
They will also be reading all of the slop generated by the current and previous generations of LLMs.
With each extra generation of AI-produced crap that models consume as training data, the worse it gets. This has been mathematically proven.
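(The "mathematically proven" claim above is usually called model collapse: a model trained only on the output of a previous model loses the tails of the distribution, generation after generation. A toy sketch of the mechanism with a one-dimensional Gaussian, stdlib only; all parameter choices here are illustrative, not from the thread:)

```python
import math
import random
import statistics

def collapse_chain(seed: int, n: int = 100, generations: int = 300) -> float:
    """Repeatedly fit a Gaussian to the previous generation's samples,
    then draw the next generation only from that fit (no fresh real data).
    Returns log(final std / initial std)."""
    rng = random.Random(seed)
    data = [rng.gauss(0, 1) for _ in range(n)]           # generation 0: "real" data
    initial = statistics.stdev(data)
    for _ in range(generations):
        mu, sigma = statistics.fmean(data), statistics.stdev(data)
        data = [rng.gauss(mu, sigma) for _ in range(n)]  # train only on model output
    return math.log(statistics.stdev(data) / initial)

# Averaged over independent chains, the drift in log-std is negative:
# the fitted distribution steadily loses variance.
ratios = [collapse_chain(seed) for seed in range(10)]
print(f"mean log(std ratio) after 300 generations: {statistics.fmean(ratios):.2f}")
```

The shrinkage comes purely from finite-sample estimation error compounding across generations; mixing in fresh human data each round is what prevents it, which is exactly what the replies below argue about.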
Strange since, in practice, coding models have steadily improved without any backward movement every 3-4 months for 2 years now. It's as if there are rigorous methods of filtering and curation applied when building your training data.
>Strange since, in practice, coding models have steadily improved without any backward movement every 3-4 months for 2 years now. It's as if there are rigorous methods of filtering and curation applied when building your training data.
It's as if what I wrote implies "all other things being equal", just like any technical claim.
All other things were not equal: the architectures were tweaked, the human data set is still not exhausted, and more money and energy were thrown at performance, since it's a pre-IPO game with huge VC stakes.
We've already seen a plateau nonetheless, compared to the earlier release-over-release performance improvements. Even the "without any backward movement every 3-4 months for 2 years now" part is hardly defensible: many saw a backward movement from GPT 4.0 to 4.1, and similar issues with 4.5, for example. Even if those are isolated cases, they're nothing like the 2 to 3.5 to 4.0 gains.
And no, there are absolutely no "rigorous methods of filtering and curation" that can separate the avalanche of AI slop from useful human output - at least not without shrinking the pool of possible training data. The problem, after all, is not just to tell AI from human with automated curation (that's already impossible); the problem is to have enough valuable new human output, which becomes nearly a losing game as all the "human" domains previously useful as training input (from code to papers) are tarnished by AI output.
> AI bots will read, train on and digest the million "how to write with AI" posts that are being written right now
Yes!
> by some of the smartest coders in the world
Hmm... How will it filter out those by the dumbest coders in the world?
Including those by parrots?
>Hmm... How will it filter out those by the dumbest coders in the world?
if you know, and I know, and the guys at OpenAI and Anthropic know... it's not a big leap that the models will know too? Many datasets are curated and labeled by humans.
> if you know, and I know,
We don't know.
> and the guys at OpenAI and Anthropic know... it's not a big leap that the models will know too?
The models don't "know" anything. They just regurgitate what they are fed.
"Child abuse images found in AI training data"
https://www.axios.com/2023/12/20/ai-training-data-child-abus...
> Many datasets are curated and labeled by humans
Including these ones: "AI industry insiders launch site to poison the data that feeds them"
https://www.theregister.com/2026/01/11/industry_insiders_see...