
Comment by wizzard0

3 years ago

my tldr: this explains

1) why huge models are important (so the gradient lives in a space high-dimensional enough that there is almost always a descent direction, i.e. the loss looks monotonic along the path training actually takes; first sketch below)

2) why attention (aka connections, aka indirections) is trainable at all (second sketch below);

and says nothing about why they might generalize beyond the dataset.
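
For what it's worth, here is a minimal sketch (mine, not the article's) of the usual argument behind 1): a random symmetric Hessian is almost never positive definite once the dimension grows, so a random critical point is almost always a saddle with a usable descent direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_local_minima(dim, trials=2000):
    """Estimate how often a random (GOE-style) Hessian is positive
    definite, i.e. how often a random critical point is a genuine
    local minimum rather than a saddle."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(size=(dim, dim))
        h = (a + a.T) / np.sqrt(2 * dim)      # random symmetric matrix
        if np.linalg.eigvalsh(h)[0] > 0:      # smallest eigenvalue > 0?
            hits += 1
    return hits / trials

for d in (1, 2, 4, 8, 16):
    print(d, frac_local_minima(d))
# prints ~0.5 for d=1, then collapses toward 0: in high dimension
# almost every critical point still has a direction that goes down
```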
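
And a toy version of 2), with made-up numbers: a hard pointer (argmax lookup into a memory) is piecewise constant, so the query gets zero gradient; softmax attention is the same indirection smoothed out, so gradients flow and the pointer becomes trainable.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# a tiny memory: 4 key/value slots (all values here are arbitrary)
keys = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
values = np.array([10., 20., 30., 40.])

def hard_lookup(q):
    # discrete indirection: jump to the single best-matching slot
    return values[np.argmax(keys @ q)]

def soft_lookup(q):
    # attention: a softmax-weighted blend over every slot
    return softmax(keys @ q) @ values

# numerical derivative w.r.t. the first query coordinate
q, eps = np.array([0.9, 0.1]), np.array([1e-4, 0.])
print((hard_lookup(q + eps) - hard_lookup(q - eps)) / 2e-4)  # 0.0, no signal
print((soft_lookup(q + eps) - soft_lookup(q - eps)) / 2e-4)  # nonzero
```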
