Comment by yarri
7 months ago
Not sure what "official" means, but I would direct you to the GCP MaxText [0] framework. It is not what this GDM paper is referring to, but the repo contains various attention implementations in MaxText/layers/attentions.py.
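For context, the common core of such attention implementations is scaled dot-product attention. A minimal NumPy sketch of that core (not MaxText's actual code, which adds masking, multi-head handling, and hardware-specific kernels) might look like:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(q @ k^T / sqrt(d)) @ v -- the basic attention primitive.

    Simplified illustration only; real implementations layer masking,
    multi-head reshapes, and fused kernels on top of this.
    """
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)    # (..., seq_q, seq_k)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                              # (..., seq_q, d)

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 8)
```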