Comment by crystal_revenge
1 month ago
> if even half the people who call themselves "AI Engineers" would read the research in the field, we'd have a lot less hype and a lot more success in finding the actual useful applications of this technology
As someone who has worked in the area for a few years now (on both the product and research side), I strongly disagree. A shocking number of papers in this area are just flat-out wrong. Universities/research teams are churning out garbage with catchy titles at such a tremendous rate that reading all of these papers will likely leave you understanding less than if you'd read none.
The papers in this list are decent, but I wouldn't be shocked if the conclusions of a good number of them were ultimately either radically altered or outright inverted as we learn more about what's actually happening in LLMs.
The best AI engineers I've worked with are just out there experimenting and building stuff. A good AI engineer definitely has to be working closely with the model; if you're just calling an API, you're not really an "AI Engineer" in my book. And while most good AI engineers have likely incidentally read most of these papers in the course of their day job, they tend to read them with skepticism.
A great demonstration of this is the Stable Diffusion community. Hardly any of the innovation in that space is even properly documented (which, of course, is not ideal), much less used for flag-planting on arXiv. Nonetheless, the generative image AI scene is exploding with creativity, novel applications, and shocking improvements, all with far fewer engineering/research resources devoted to the task than their peers in the LLM world enjoy.
Couldn't agree more with you. At the end of the day, the people building the most successful products are too busy to formalize their experiments into research papers. While I have respect for academic researchers, I think their perspective is fundamentally limited when it comes to AI engineering. The space is just too frothy.