
Comment by lcnPylGDnU4H9OF

19 days ago

To their point, there hasn’t been any huge breakthrough in this field since the “attention is all you need” paper. Not really any major improvements to model architecture, as far as I am aware. (Admittedly, this is a new field of study to me.) I believe one hope is to develop better methods for self-supervised learning; I am not sure of the progress there. Most practical improvements have been on the hardware and tooling side (GPUs and frameworks like PyTorch).

Don’t get me wrong: the current models are already powerful and useful. However, there is still a lot of reason to remain skeptical of an imminent explosion in intelligence from these models.

You’re totally right that there hasn’t been a fundamental architectural leap like “attention is all you need”; that was a generational shift. But I’d argue that what we’ve seen since is a compounding of scale, optimization, and integration that has changed the practical capabilities quite dramatically, even if it doesn’t look flashy in an academic sense. The models are qualitatively different at the frontier: more steerable, more multimodal, and increasingly able to reason across context. It might not feel like a revolution on paper, but the impact in real-world workflows is adding up quickly. Perhaps all of that can be put in the bucket of “tooling”, but from my perspective there have still been quite large leaps looking at cost differences alone.

For some reason my pessimism meter goes off when I see single-sentence arguments like “change has been slow”. Thanks for bringing the conversation back.

  • I'm all for flashy in the academic sense, because we can let engineers sort out the practical aspects, especially by combining flashy academic approaches. The flaw in the LLM architecture could be predicted from the original paper; no amount of engineering can compensate for that.