Comment by menaerus
3 months ago
You can choose to remain somewhat ignorant of the current state of AI, and I'd even agree that at certain moments it appears totally overhyped, but the reality is that there probably hasn't been a bigger technology breakthrough in the last ~30 years.
This is not "just" machine learning because we have never been able to do things which we are today and this is not only the result of better hardware. Better hardware is actually a byproduct. Why build a PFLOPS GPU when there is nothing that can utilize it?
If you set aside some time and read through the actual (scientific) papers behind multiple generations of LLMs, the first one being from Google (edit: not DeepMind) in 2017, you might come to understand that this is not fluff.
And I'm saying this from the position of a software engineer, without bias.
The reason all of this took off at such high speed is the unexpected results: early LLM experiments showed that, with the current transformer architecture, "knowledge" scales predictably with the amount of compute, data, and training time. That was very unexpected, and to this day researchers don't have a full answer as to why it even works.
So, after reading a bunch of material, I am inclined to think that this is something different. A future where you load a codebase into the model and ask it to explain the code or fix bugs has never been so close and realistic. For better or worse.
This line of thinking doesn't really correspond to the reason Transformers were developed in the first place, which was to better utilize how GPUs do computation. RNNs were too slow to train at scale because you had to compute the time steps sequentially; Transformers (with masking) can run the entire input through in a single pass.
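To make the contrast concrete, here's a rough, illustrative numpy sketch (toy sizes and random weights, not any real implementation): the RNN needs a step-by-step loop over time, while masked self-attention handles every position in one matmul-heavy pass, which is exactly what GPUs are good at.

    # Illustrative only: sequential RNN loop vs. one masked self-attention pass.
    import numpy as np

    T, d = 6, 8                      # sequence length, hidden size (toy values)
    x = np.random.randn(T, d)        # toy input sequence

    # RNN: each hidden state depends on the previous one -> a T-step loop.
    Wx, Wh = np.random.randn(d, d), np.random.randn(d, d)
    h = np.zeros(d)
    rnn_states = []
    for t in range(T):               # cannot be parallelized across time steps
        h = np.tanh(x[t] @ Wx + h @ Wh)
        rnn_states.append(h)

    # Masked self-attention: all positions computed at once; the causal mask
    # just forbids attending to future tokens.
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d)
    mask = np.triu(np.ones((T, T)), k=1).astype(bool)
    scores[mask] = -np.inf           # hide the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    attn_out = weights @ V           # one pass over the whole sequence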
It is worth noting that the first "LLM" you're referring to had only a few hundred million parameters, but even then the amount of training required (at the time) was such that training a model like that outside of a big tech company was infeasible. Obviously, we now have models in the hundreds of billions or trillions of parameters. The ability to train these models is directly a result of better and more hardware being applied to the problem, as well as the Transformer architecture being specifically designed to suit parallel computation at scale.
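To put those parameter counts in hardware terms, a quick back-of-envelope (fp16, 2 bytes per parameter; the real training footprint is several times higher once you add optimizer state, gradients, and activations):

    # Back-of-envelope: memory just to hold the weights in fp16.
    # Example sizes: ~300M (early Transformer era), 175B (GPT-3), 1T-class.
    for params in (300e6, 175e9, 1e12):
        gb = params * 2 / 1e9        # 2 bytes per parameter
        print(f"{params:.0e} params -> ~{gb:.1f} GB just for fp16 weights")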
The first GPT model came out ~8 years ago. I recall that when GPT-2 came out, they initially didn't want to release the weights out of concern for what the model could be used for; looking back now, that's kind of amusing. Fundamentally, though, all these models use the same setup as what was used then: decoder-based Transformers. They are just substantially larger, trained on substantially more data, with substantially more hardware.
What line of thinking are you referring to?
Transformers were aimed at solving the "context" problem, and the authors, aware that RNNs neither scale nor solve that particular problem, had to come up with an algorithm that overcomes both issues. It turned out that self-attention, and the way it scales with compute, was the crucial ingredient for solving the problem, something RNNs were totally incapable of.
They designed the algorithm to run on the hardware available at the time, but the hardware developed afterwards was a direct consequence, or as I called it, a byproduct, of transformers proving themselves able to continuously scale. Had that not been true, we wouldn't have all those iterations of Nvidia chips.
So, although one could say that Nvidia's chip design is what enabled the transformers' success, one could also say that we wouldn't have those chips if transformers hadn't proven themselves to be so damn efficient. I'm inclined toward the latter.
> This is not "just" machine learning because we have never been able to do things which we are today and this is not only the result of better hardware. Better hardware is actually a byproduct. Why build a PFLOPS GPU when there is nothing that can utilize it?
This is the line of thinking I'm referring to.
The "context" problem had already been somewhat solved. The attention mechanism existed prior to Transformers and was specifically used on RNNs. They certainly improved it, but innovation of the architecture was making it computation efficient to train.
I'm not really following your argument. You're clearly acknowledging that, with the hardware of the time, researchers first demonstrated that simply scaling up training with more data yielded better models. The fact that hardware was then optimized for these architectures only reinforces this point.
All the papers discussing scaling laws point to the same thing: simply using more compute and data yields better results.
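For a rough sense of what those papers report, here's an illustrative Python sketch of the Kaplan-style power-law form, loss ~ (C_c / C)^alpha in training compute; the constant and exponent below are made-up placeholders, not the published fits:

    # Illustrative only: loss as a power law in training compute.
    # c_c and alpha are placeholders, not values from the scaling-law papers.
    def predicted_loss(compute_pf_days, c_c=1.0e2, alpha=0.05):
        return (c_c / compute_pf_days) ** alpha

    for compute in (1e2, 1e4, 1e6):  # PF-days of training compute
        print(f"{compute:.0e} PF-days -> loss ~ {predicted_loss(compute):.3f}")

The point the papers make is that curves like this stay smooth and predictable across many orders of magnitude of compute, which is why "just scale it up" kept working.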
> this is not only the result of better hardware
Regarding this in particular: a majority of the improvement from GPT-2 to GPT-4 was simply training at a much larger scale. That was enabled by better hardware, and lots of it.
> the first one being from DeepMind in 2017
What paper are you talking about?
https://arxiv.org/abs/1706.03762
Oh, gotcha. Maybe pedantic, but that is not a DeepMind paper.