Comment by xyhopguy
2 hours ago
not really. early deep learning models were run on single consumer-grade GPUs. the inflection occurred _right_ when parallel computing became fast enough to do backprop in a reasonable amount of time, with performance better than tree methods.
at that time, all the compute resources in the world would not have been enough to train the models of even the last ~6 years, probably longer.