Comment by dr_dshiv

8 hours ago

One hour of Claude code— well, I’d guess it would be comparable to an hour of driving an electric car. How to know?

OP says one query uses 0.3 Wh. Driving an electric car for 10 miles ≈ 3,000 Wh, which works out to roughly 10,000 Wh per hour.

I'm not sure how many queries are equivalent to an hour of Claude Code use, but guessing one query every 5 seconds, an hour of continuous use = 720 queries × 0.3 Wh = 216 Wh, or ~50x less than an electric car.
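The back-of-envelope math above can be checked in a few lines (the 0.3 Wh/query figure is the OP's; one query every 5 seconds and the 10,000 Wh/hour EV figure are guesses from this thread, not measurements):

```python
# Back-of-envelope comparison. Assumptions (guesses, not measurements):
# 0.3 Wh per query (OP's figure), one query every 5 seconds,
# 10,000 Wh per hour for the electric car (3,000 Wh per 10 miles).
wh_per_query = 0.3
queries_per_hour = 3600 / 5              # 720 queries in an hour
llm_wh_per_hour = wh_per_query * queries_per_hour

ev_wh_per_hour = 10_000
ratio = ev_wh_per_hour / llm_wh_per_hour

print(llm_wh_per_hour)   # 216.0
print(round(ratio))      # 46, i.e. roughly 50x
```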

OP has a longer article about LLM energy usage: https://hannahritchie.substack.com/p/ai-footprint-august-202...

  • Beside the point, but 10,000 Wh per hour is kind of an insane unit. It's 10,000 watts. Or 10 kW if you're really into the whole brevity thing.

It is not only about raw power consumption. Comparing driving an electric car with using AI only in kWh hides a major point: hyperscale datacenters are massively centralised, which brings its own problems; a lot of energy is used for cooling, and water consumption is enormous. Charging electric cars at home is distributed and does not suffer from the same problems as the centralised hyperscalers do. Also, running AI models at home is not much different than a gaming session :)

  • This is an incredible sequence of assertions, every single one of which is incorrect.

    "A lot of energy used for cooling": hyperscale data centers use the least cooling per unit of compute capacity, 2-3x less than small data centers and 10-100x less than a home computer.

    "Water consumption is enormous": America withdraws roughly 300 billion gallons of fresh water daily; data center IT loads are expected to grow to 35-50 billion gallons annually by 2028. Data center water demands are less than a rounding error.
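    For scale, here is the share those projected withdrawals represent, a rough calculation using only the figures quoted above (taking the upper 50 billion gallon bound):

```python
# Rough share of US freshwater withdrawals going to data centers.
# Figures from the comment above: ~300 billion gallons withdrawn daily;
# 35-50 billion gallons/year projected for IT loads by 2028 (upper bound used).
annual_us_withdrawal_gal = 300e9 * 365     # ~1.1e14 gallons/year
dc_annual_gal = 50e9
share = dc_annual_gal / annual_us_withdrawal_gal

print(f"{share:.4%}")   # 0.0457% -- well under a tenth of a percent
```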

    "distributed and does not suffer from the same problems": technically correct I guess but distributed consumption has its own problems that are arguably more severe than centralized power consumption.