Comment by spamizbad
7 hours ago
> AI's / LLM's have already been trained on best practices for most domains.
I've been at this long enough to see that today's best practices are tomorrow's anti-patterns. We have not, in fact, perfected the creation of software. And your practices will evolve not just with the technology you use but with the problem domains you're in.
I don't mean this as an argument against LLMs or vibe coding. Just that you're always going to need a fresh corpus to train them on to keep them current... and if the pool of expertly written code dries up, models will begin to stagnate.
I've been doing this a long time too. The anti-patterns tend to come from the hype cycles of "xyz shiny tool/pattern will take away all the nasty human problems that end up creating bad software". Yes, LLMs will follow this cycle too, and I agree we're in a kind of sweet-spot moment for LLMs, where they were able to ingest massive amounts of training material from the open web. That will not be the case going forward, as people seek to guard their IP more tightly. The open question is whether the training material that already exists, plus whatever the tools can self-generate, is good enough for them to improve themselves in a closed loop.
LLM-generated code was the right tool for my job today; that doesn't mean it's the right tool for everyone's job, or that it always will be. The one constant in this industry is change. It's sold as revolutionary, which is the truth, in the sense of going in circles/cycles.
Also, they've been trained on common practices more than on best practices. And best practice is heavily context-dependent anyway.