Comment by cbm-vic-20
4 hours ago
I've never been Wolfram's biggest fan, but this is a solid article. I'm trying to get a deeper understanding of the transformer architecture, and it seems that the written articles on transformers are bimodal: they either blind you with the raw math, or handwave the complexity away. I have been trying to figure out why the input embedding matrix is simply added to the input position matrix before the encoding stage, as opposed to some other way of combining these. Wolfram says:
> Why does one just add the token-value and token-position embedding vectors together? I don’t think there’s any particular science to this. It’s just that various different things have been tried, and this is one that seems to work. And it’s part of the lore of neural nets that—in some sense—so long as the setup one has is “roughly right” it’s usually possible to home in on details just by doing sufficient training, without ever really needing to “understand at an engineering level” quite how the neural net has ended up configuring itself.
It's the lack of "understand[ing] at an engineering level" that irks me: that this emergent behavior is discovered, rather than designed.
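For concreteness, a minimal NumPy sketch of the step in question, using made-up sizes and a sinusoidal positional encoding (one common choice; GPT-style models learn the position table instead). The point is just that the token embedding and the position embedding are summed element-wise, rather than concatenated or combined some other way:

```python
import numpy as np

# Illustrative sizes (not from the article): tiny vocab and model width.
vocab_size, d_model, seq_len = 1000, 64, 10

# Learned token-embedding table: one d_model-dimensional vector per token id.
token_embedding = np.random.randn(vocab_size, d_model) * 0.02

# Sinusoidal position embeddings: sin on even dimensions, cos on odd ones.
positions = np.arange(seq_len)[:, None]        # shape (seq_len, 1)
dims = np.arange(d_model)[None, :]             # shape (1, d_model)
angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
position_embedding = np.where(dims % 2 == 0,
                              np.sin(positions * angle_rates),
                              np.cos(positions * angle_rates))  # (seq_len, d_model)

# A toy input sequence of token ids.
token_ids = np.random.randint(0, vocab_size, size=seq_len)

# The step the quote is about: the two embeddings are simply added
# before the result is fed into the attention/encoder layers.
x = token_embedding[token_ids] + position_embedding            # (seq_len, d_model)
print(x.shape)  # (10, 64)
```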