Comment by insane_dreamer

2 months ago

it does not use "some" resources

it uses a fuck ton of resources[0]

and instead of reducing energy production and emissions we will now be increasing them, which, given current climate prediction models, is in fact "killing the planet"

[0] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...

Data centers account for roughly 1% of global electricity demand and ~0.5% of CO2 emissions, per your link. That's for data centers as a whole, since the IEA and some other orgs group "data centres, AI, and cryptocurrency" as a single aggregate unit. AI alone accounts for roughly 10-14% of a given data center's total energy; cloud deployments make up ~54%, traditional compute around ~35%.
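Taking those rough shares at face value (the comment's estimates, not exact IEA figures), AI's implied slice of global electricity is about a tenth of a percent:

```python
# Back-of-envelope check using the rough shares cited above.
# These are the comment's estimates, not authoritative IEA numbers.
dc_share_of_global_electricity = 0.01      # data centers ~1% of global demand
ai_share_of_dc_energy = (0.10, 0.14)       # AI ~10-14% of a data center's energy

ai_share_of_global = [dc_share_of_global_electricity * s
                      for s in ai_share_of_dc_energy]
print([f"{x:.2%}" for x in ai_share_of_global])  # → ['0.10%', '0.14%']
```

So under these assumptions AI draws on the order of 0.1% of global electricity today, which is the "sliver" the rest of the argument rests on.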

The fact is that AI, by any definable metric, is only a sliver of the global energy supply right now. Outside the social media hype, what actual climate scientists and orgs talk about isn't (mostly) what AI is consuming now; it's what the picture looks like within the next decade. THAT is the real horror show if we don't pull policy levers. Anyone who says that AI energy consumption is "killing the planet" is either intentionally misrepresenting the argument or unbelievably misinformed. What's actually, factually "killing the planet" are energy/power, heavy industry (steel, cement, chemicals), transport, and agriculture/land use. AI consumption is a rounding error compared to these.

We'll ignore the fact that AI is actually being used to manage data-center energy efficiency and has reduced energy consumption at some hyperscale DCs (Amazon, Alibaba, Alphabet, Microsoft) by up to 40%, making it one of the only industry sectors with a real, non-trivial chance at net-zero if deployed at scale.

The most interesting thing about this whole paradigm is just how deep a grasp AI (specifically LLMs) has on the collective social gullet. It's like nothing I've ever been a part of. When Deepwater Horizon blew up and spilled 210M gallons of crude into the Gulf of Mexico, people (rightfully so) got pissed at BP and Transocean.

Nobody, from what I remember, got angry at the actual, physical metal structure.

  • > what actual climate scientists and orgs talk about isn't (mostly) what AI is consuming now, it's what the picture looks like within the next decade

    that's the point - obviously the planet is not dying _today_, but at the rate we're failing to decrease emissions, we will kill it. So no, "killing the planet" is not misinformed or misleading.

    > Nobody, from what I remember, got angry at the actual, physical metal structure.

    Nobody's mad at LLMs either. It's the companies that control them and that are fueling the AI "arms race", that are the problem.

    • >So no, "killing the planet" is not misinformed or misleading.

      When we talk as if a few years of AI build‑out are “killing the planet” while long‑standing sectors that make up double‑digit shares of global emissions are treated as the natural background, we’re not doing climate politics, we’re doing scapegoating. The numbers just don’t support that narrative.

      The IEA and others are clear: the trajectory is worrying (data‑center demand doubling, AI the main driver), but present‑day AI still accounts for a single‑digit percent of electricity, not a primary causal driver.

      >Nobody's mad at LLMs either. It's the companies that control them and that are fueling the AI "arms race", that are the problem.

      That’s what people say, yet when asked or given the opportunity, the literature shows they’re perfectly willing to “harm” and “punish” LLMs and social robots.

      Corporations are absolutely the primary locus of power and responsibility (read: root of all evil) here; none of this denies AI's energy risks, social harms, or the likelihood that deployments will push more people into precarity (read: homelessness) in 2026. The point is about where the anger actually lands in practice.

      Even when it's narratively framed as being "about" companies and climate policy, that anger is increasingly channeled through interactions with the models themselves. People insult them, threaten them, talk about "punishing" them, and argue over what they "deserve". That's not "nobody being mad at the LLMs"; that's treating something as a socially legible agent.

      So people can say they’re not mad at AI models, but their behavior tells a very different story.

      TL;DR: Between those who think LLMs have "inner lights" and feelings and deserve moral patienthood, and those who insist they're just "stochastic parrots" that are "killing the planet," both camps have already installed them as socially legible agents and treat them accordingly. As AI "relationships" grow, so do hate-filled interactions framed in terms of "harm," abuse, and "punishment" directed at the systems/models themselves.

      [https://www.frontiersin.org/journals/robotics-and-ai/article...]

      [https://pmc.ncbi.nlm.nih.gov/articles/PMC9951994/]

      [https://www.frontiersin.org/journals/robotics-and-ai/article...]

      [https://www.sciencedirect.com/science/article/pii/S001002772...]

      [https://fortune.com/2025/10/30/being-mean-to-chatgpt-can-boo...]

This, and the insane amount of resources (energy and materials) to build the disposable hardware. And all the waste it's producing.

Simon,

> I find Claude Code personally useful and aim to help people understand why that is.

No offense, but we don't really need your help. You went on a mission to teach people to use LLMs; I don't know why you feel the urge, but it's not too late to stop doing this, and even to teach them not to, and why.

  • Given everything I've learned over the last ~3 years I think encouraging professional programmers (and increasingly other knowledge workers) not to learn AI tools would be genuinely unethical.

    Like being an accountant in 1985 who learns to use Lotus 1-2-3 and then tells their peers that they should actively avoid getting a PC because this "spreadsheet" thing will all blow over pretty soon.

    • I agree that if you're going to do coding, using LLMs will become as commonplace as using a text editor, and it's valuable to help people upskill.

      And as much as I find CC useful in my own work, I'm unhappy because I believe that AI -- actually not AI itself, which has its place, but the race to use AI to enrich corporations by replacing human labor, and to control what will become the most powerful tool ever known for informing, entertaining, monitoring, and controlling the human race -- is very much a net negative for humanity and even our planet.