
Comment by dathinab

1 year ago

probably both

The size of some of the large language models is just insane. To put it into context: GPT-3 is already a Large Language Model (LLM), but GPT-4 has (handwaving) 5+ times as many parameters. This means the energy cost for inference is also at least that much larger.

And if we look at training instead of inference it's quite a bit more complicated (each iteration is more costly, and more parameters can also require more iterations, but at the same time you might need to spend less time in a regime of increasingly diminishing returns per iteration). Still, if we go with GPT-3 vs. GPT-4, the 5x+ increase in parameters led to a 10x+ increase in training cost (a huge part of which is energy cost, though also amortized hardware cost; roughly $10M to >$100M).
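
A minimal back-of-envelope sketch of that scaling in Python, using the handwaved numbers from above (GPT-3's published ~175B parameters; the 5x, 10x and dollar figures are rough assumptions, not official numbers):

```python
# Rough back-of-envelope for how cost scales with parameter count.
# All numbers are handwaved estimates, not official figures.

GPT3_PARAMS = 175e9          # GPT-3's published parameter count
PARAM_FACTOR = 5             # assumed ~5x more parameters in GPT-4
GPT3_TRAINING_COST = 10e6    # assumed ~$10M to train GPT-3
COST_FACTOR = 10             # assumed ~10x jump in training cost

# Per-query inference cost scales roughly with parameter count,
# so ~5x the parameters means at least ~5x the energy per query.
gpt4_params = GPT3_PARAMS * PARAM_FACTOR

# Training cost grew super-linearly: each iteration is pricier and
# you may also run more of them.
gpt4_training_cost = GPT3_TRAINING_COST * COST_FACTOR

print(f"GPT-4 (assumed): ~{gpt4_params / 1e9:.0f}B params, "
      f"training ~${gpt4_training_cost / 1e6:.0f}M+")

# One more generation on the same trend already lands in the
# (many) hundreds of millions of dollars for training alone.
print(f"Next generation (same trend): ~${gpt4_training_cost * COST_FACTOR / 1e6:.0f}M+")
```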

Additionally, there are various analysis steps you might do when creating new models, which can also be hugely costly (in energy).

And that is just GPT-4. With OpenAI, every major version bump seems to come with a major increase in parameter count, so if that trend continues we are looking at energy bills in the (potentially many) hundreds of millions of US dollars for training alone.

Another thing wrt. inference cost: with my limited understanding, currently the best way to approach AGI (and also a lot of other tasks) is to run multiple models in tandem. Though these models might be domain adaptations of the same base model, so it's not twice the training cost.
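
To make the "multiple domain-adapted models in tandem" idea concrete, here is a toy Python sketch; the checkpoint names and the keyword router are hypothetical placeholders, not how any real system does it:

```python
# Toy sketch: several specialists fine-tuned from the same base model,
# with a cheap router picking one per request. Names are hypothetical.

SPECIALISTS = {
    "code": "base-model-code-adapted",
    "math": "base-model-math-adapted",
    "chat": "base-model-chat-adapted",
}

def route(prompt: str) -> str:
    """Very naive keyword routing; a real system might use a small classifier."""
    if "def " in prompt or "import " in prompt:
        return "code"
    if any(ch.isdigit() for ch in prompt):
        return "math"
    return "chat"

def answer(prompt: str) -> str:
    specialist = SPECIALISTS[route(prompt)]
    # Only one specialist runs per request, so inference cost stays close to a
    # single model, and training cost is mostly shared via the common base.
    return f"[{specialist}] would handle: {prompt!r}"

print(answer("import numpy as np"))
print(answer("what is 17 * 23?"))
```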

---

Domain Adaptation == take a trained model and then train it a bit more to "specialize" it on a specific task, potentially adding additional knowledge etc. While you can try to do the same just with prompt engineering, a larger prompt comes at a larger cost and a higher chance of unexpected failure, so by in a certain way "burning in" some additional behavior, knowledge etc. you can get a nice benefit.
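
As a hedged illustration of what that can look like, here is a minimal domain-adaptation sketch using the Hugging Face transformers library with GPT-2 as a stand-in base model; the example texts and output path are made up, and a real fine-tune would need far more data, batching, and evaluation:

```python
# Minimal domain adaptation sketch: keep training a pretrained model on a
# handful of domain-specific texts so the behavior is "burned into" the weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                              # small stand-in base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Tiny made-up "domain" corpus; in practice this would be much larger.
domain_texts = [
    "Customer ticket: the invoice export fails with error E42.",
    "Resolution: re-run the export after clearing the billing cache.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):                           # a few passes over the data
    for text in domain_texts:
        inputs = tokenizer(text, return_tensors="pt")
        # For causal LM fine-tuning the labels are just the input tokens.
        outputs = model(**inputs, labels=inputs["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# The extra behavior now lives in the weights, so inference-time prompts can
# stay short instead of carrying the domain context on every call.
model.save_pretrained("gpt2-domain-adapted")
tokenizer.save_pretrained("gpt2-domain-adapted")
```

In practice this is often done with parameter-efficient methods (e.g. LoRA) instead of full fine-tuning, which keeps the per-specialist training cost even lower.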