Comment by subroutine
17 hours ago
Are you asking if the 10 seconds it takes AI to generate an image is more costly to the environment than a commissioned graphics artist using a laptop for 5-6 hours, or a painter who uses physical media sourced from all over the world?
In short, yes.
A modern laptop runs almost fanless, like a 486 from the days of yore.
A single H200 pumps out 700W continuously in a data center, and you run thousands of them.
Also, don't forget the training and fine-tuning runs required for the models.
Mass transportation / global logistics can be very efficient and cheap.
Before the pandemic, it was in some cases cheaper to import fresh tomatoes from half a world away than to grow them locally. A single container of painting supplies is nothing in the grand scheme of things, especially when compared with what data centers are consuming and emitting.
This argument is so flawed that its conclusion almost loops back around to being correct again:
No, in terms of unit economics, I'm almost certain that the painting supplies have a bigger ecological/resource footprint than an LLM per icon generated, and I'm pretty sure the cost of shipping tomatoes does not decrease that footprint, even if it possibly dwarfs it.
But yes, due to Jevons paradox, the total resource use might well increase despite all that. I, for example, would never have commissioned a professional icon for my silly little iOS shortcuts on my home screen, so my silly icon-related carbon footprint went from exactly zero to slightly above that.
This is a plainly dishonest comparison. A single H200 does not need to run continuously for you to generate a dozen pictures. And then you immediately pivot to comparing the paint usage against "the grand scheme of things" - 700W is nothing in the grand scheme of things.
In fact it's pretty fair.
Many people think that when a piece of hardware is idle, its power consumption becomes irrelevant, and that's true for home appliances and personal computers.
However, the picture is pretty different for datacenter hardware.
Looking now, an idle V100 (I don't have an idle H200 at hand) draws 40 watts at minimum. That's more than the TDP of many modern consumer laptops and systems. A MacBook Air uses a 35W power supply to charge, and it charges pretty quickly even under relatively high load.
I want to clarify a few more things. A modern GPU server houses 4-8 high-end GPUs. This means 3kW to 5kW of maximum power draw per server. A single rack runs around 75kW-100kW, and you house hundreds of these racks. So we're talking about megawatts of power consumption. CERN's main power line on the Swiss side had a capacity of around 10MW, to put things in perspective.
Let's assume an H200 draws 60W when idle. That's ~500W of wasted power per server just for sitting around. If a complete rack is idle, it's 10kW. So you're burning the power consumption of 3-5 houses by doing nothing at all.
This computation only accounts for the GPUs. The rest of the server hardware adds around 40% on top of these numbers. Go figure. That's a lot of waste for cat pictures.
And, these "small" numbers add up to a lot.
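The back-of-envelope arithmetic above can be sketched out explicitly. All figures here are the assumptions from the comment (60W idle per GPU, 8 GPUs per server, ~20 servers to reach 10kW per rack, ~40% non-GPU overhead), not measurements:

```python
# Idle-power estimate using the assumed figures from the comment above.
IDLE_W_PER_GPU = 60      # assumed H200 idle draw
GPUS_PER_SERVER = 8      # high end of the 4-8 GPUs per server range
SERVERS_PER_RACK = 20    # assumption implied by the ~10 kW/rack figure
OVERHEAD = 0.40          # non-GPU server hardware: CPUs, fans, PSU losses

idle_per_server_w = IDLE_W_PER_GPU * GPUS_PER_SERVER       # ~500 W
idle_per_rack_w = idle_per_server_w * SERVERS_PER_RACK     # ~10 kW
with_overhead_w = idle_per_rack_w * (1 + OVERHEAD)         # ~13 kW

print(f"idle per server: {idle_per_server_w} W")
print(f"idle per rack:   {idle_per_rack_w / 1000:.1f} kW")
print(f"with overhead:   {with_overhead_w / 1000:.1f} kW")
```

At hundreds of racks, even the idle figure alone lands in the megawatt range, which is the point being made.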
4 replies →
these are unfair comparisons. it's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. it's not a single container of painting supplies, it's all of them (which are toxic, by the way).
so if power were plentiful and environmental you'd be onboard with it?
> these are unfair comparisons. it's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. it's not a single container of painting supplies, it's all of them (which are toxic, by the way).
Please see my other comment about energy consumption and connect the dots with how open-loop DLC (direct liquid cooling) systems are harmful to fresh water supplies (which is another comment of mine).
> so if power were plentiful and environmental you'd be onboard with it?
This is a pretty loaded way to ask. Let me put it straight: I'm not against AI. I'm against how this thing is being built.
I work in HPC. I support AI workloads and projects, but the projects we tackle have real benefits, like ecosystem monitoring, long-term climate science, and water level warning and prediction systems, which have tangible benefits for the future of humanity. Moreover, we're part of other projects trying to minimize the environmental impact of computation.
So it's pretty nuanced, and the AI iceberg goes well below OpenAI/Anthropic/Mistral trio.
1 reply →
Cheaper/faster tech increases overall consumption though. Without the friction of commissioning a graphics artist to design something, a user can generate thousands of images (and iterate on those images multiple times to achieve what they want), resulting in way more images overall.
I'm not really well versed on the environmental cost, more just (neutrally) pointing out that comparing a single 10s image to a 5-6 hour commission ignores the fact that the majority of these images probably would never have existed in the first place without AI.
Also, ignoring training when talking about environmental costs is bad faith. Without training, this image would not exist, and if nobody were generating images like these, the training would not happen. So we should really count the 10 seconds of inference plus a share of the weeks or months of high-intensity compute it took to train the model.
You'd want to compare against the fraction of the training cost attributable to that one image.
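The amortization argument is easy to make concrete. Every number below is a made-up placeholder (a hypothetical one-time training cost, lifetime image count, and per-generation inference energy), purely to show how a training share gets attributed per image:

```python
# Hypothetical amortization of a one-time training cost over all
# images a model ever generates. All numbers are placeholders,
# not measurements of any real model.
TRAINING_MWH = 1_000            # assumed one-time training energy, MWh
IMAGES_SERVED = 1_000_000_000   # assumed lifetime image count
INFERENCE_WH = 2                # assumed energy per ~10 s generation, Wh

# Convert MWh -> Wh, then spread over every image served.
training_wh_per_image = TRAINING_MWH * 1_000_000 / IMAGES_SERVED
total_wh_per_image = INFERENCE_WH + training_wh_per_image

print(f"training share per image: {training_wh_per_image:.3f} Wh")
print(f"total per image:          {total_wh_per_image:.3f} Wh")
```

The takeaway is structural, not numerical: the more images a model serves, the smaller the per-image training share becomes, so whether training dominates depends entirely on lifetime usage.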
1 reply →
Wow, do you hold a degree in false dichotomies?