Quick napkin math time!
Steam reached a new peak of 42 million concurrent players today [1]. An average mid-tier gaming PC draws about 200 W, i.e. 0.2 kWh per hour [2]. 42 million * 0.2 gives 8,400,000 kWh per hour, or 8,400 MWh per hour.
By contrast, training GPT-3 was estimated to have used 1,300 MWh of energy [3].
This does not account for the training costs of newer models, nor for inference costs. But we know inference is extraordinarily inexpensive and energy efficient [2]. Even the low-end estimate for one hour of Steam's peak concurrent player count comes to roughly 6.5x all of the energy that went into training GPT-3.
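A minimal sketch of that arithmetic in Python, if you want to poke at the assumptions (the player count, per-PC draw, and GPT-3 training figure are taken from the sources below; the rest is just unit conversion):

    # Napkin math: Steam peak concurrency vs. GPT-3 training energy.
    PLAYERS = 42_000_000            # peak concurrent Steam players [1]
    KWH_PER_PLAYER_HOUR = 0.2       # ~200 W average draw, mid-tier gaming PC [2]
    GPT3_TRAINING_MWH = 1_300       # estimated GPT-3 training energy [3]

    gaming_mwh_per_hour = PLAYERS * KWH_PER_PLAYER_HOUR / 1_000
    print(f"Gaming: {gaming_mwh_per_hour:,.0f} MWh per hour")                        # -> 8,400
    print(f"vs GPT-3 training: {gaming_mwh_per_hour / GPT3_TRAINING_MWH:.1f}x")      # -> 6.5x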
[1]: https://www.gamespot.com/articles/steam-has-already-set-a-ne...
[2]: https://jamescunliffe.co.uk/is-gen-ai-bad-for-the-environmen...
[3]: https://www.theverge.com/24066646/ai-electricity-energy-watt...
I'd rather people play games, even extremely mediocre ones, than generate AI slop images or code.
I don't have a preference. Both are valuable in their own way.
It's very weird to compare LLM training with a subset of gamers.
Who lied to you and told you this was some kind of savings gotcha??
Come again?
I was skeptical of the LLM energy use claim, so I went looking for numbers in a domain that most people don't worry about, or at least don't actively perceive as a net negative. Gaming is a very big industry ($197 billion in 2025 [1], compared with $252 billion in private AI investment for 2025 [2]) and mostly runs on the same hardware as LLMs. So it's a good gut check.
I have not seen evidence that LLM energy usage is out of control. It appears to be much less than gaming's. But please feel free to provide sources that demonstrate otherwise.
The question is whether claims about AI energy use have substance, or whether there are other industries that should be more concerning. Either people are truly concerned about the cost of energy, or it's a misplaced excuse to reinforce negative opinions they already hold.
[1]: https://gameworldobserver.com/2025/12/23/the-gaming-industry...
[2]: https://hai.stanford.edu/ai-index/2025-ai-index-report/econo...
How many lives would AI have to save for you to say the energy cost is worth it?
I see no point in making this a numbers game. (Like, I was supposed to say "five" or something?)
Let's make it more of a category thing: when AI shows itself responsible for a new category of life-saving technique, like a cure for cancer or Alzheimer's, then I'd have to reconsider.
(And even then, it will be balanced against rising sea levels, extinctions, and other energy use effects.)
> when AI shows itself responsible for a new category of life-saving technique, like a cure for cancer or Alzheimer's, then I'd have to reconsider.
We’re way past that
How many lives have been saved by AI? How many lives have been lost because of it?
Not what I'm asking. But idk, do you have stats? I wouldn't count _lost_ against it; _ruined_ or _negatively impacted_ is problem enough.
Far less than you'd think for local LLMs.
Local LLMs that you can run on consumer hardware don't really do anything, though. They're amusing, and maybe you could use them for basic text search, but they don't have any real knowledge the way the hosted ones do.
Gemma 3 27B, plus smaller models in the 8-16B range and on up to about 32B, can run on hardware that fits in the "consumer" bracket. RAM is more expensive now, but most people can afford a machine with 32GB and maybe a small graphics card.
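For a rough sense of why a 27B model fits on that kind of machine, here's my own back-of-envelope (the 4-bit quantization and the ~20% overhead figure are assumptions, not from this thread):

    # Approximate memory needed for the weights of a quantized local model.
    # Assumes ~4-bit quantization (0.5 bytes/param) plus ~20% overhead for
    # KV cache and buffers; actual usage varies by runtime and context size.
    def approx_gb(params_billions, bytes_per_param=0.5, overhead=1.2):
        return params_billions * bytes_per_param * overhead

    for size_b in (8, 16, 27, 32):
        print(f"{size_b}B params -> ~{approx_gb(size_b):.0f} GB")
    # 27B -> ~16 GB, which is why it squeezes onto a 32GB box.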
Small models don't have as much world knowledge as very large models (proprietary or open source), but that's not always needed. They can still do a lot: OCR, image captioning, tagging, following well-defined instructions, general chat, and some coding are all things local models handle pretty well.
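As a concrete example of the "well-defined instructions" case, here's a minimal sketch against a model served locally by Ollama (the model tag and the ticket text are just placeholders; use whatever you have pulled):

    # Hypothetical tagging task against a locally served model.
    # Assumes `ollama serve` is running and the model has been pulled.
    import json, urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "gemma3:12b",  # assumption: any small local model will do
            "prompt": "Tag this ticket with exactly one of: bug, feature, "
                      "question.\nTicket: the app crashes when I rotate my phone.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])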
Edit: fixed unnecessarily abrasive wording