Comment by jraph

2 months ago

> I totally understand his rage.

Do you really? What follows makes me doubt it a bit.

> Thank you notes from AI systems can’t possibly feel meaningful,

Indeed, but that's quite minor.

> So I had Claude Code do the rest of the investigation:

Can't you see it? That would likely be a huge facepalm from Rob Pike here!

He writes more or less "fuck you people with your planet-killing AI horror machine", and here you are: "what happened? I asked a planet-killing horror machine (the same one, btw) and...". No. Really. The bigger issue is not the email, nor even the initiative behind it, which is terrible but just a symptom. And this:

> Don’t unleash agents on the world like this

> I don’t like this at all.

You're not wrong, but the cynic in me reads this as: "don't do this, it makes AI, which I love, look bad". Absolutely uncharitable view, I know, but really, the meaningless email is infuriating but hardly the important part.

This makes the post feel pretty myopic to me. You are spending your time on a minor symptom, you don't touch what fundamentally annoys Rob Pike (the planet-killing part), and worse, you engaged in exactly what Rob Pike has just strongly rejected. You may not have meant to, and it may be that you deliberately avoided touching the substance of Rob Pike's complaint because you disagree with it, but it feels like you missed the point. If I were in Rob Pike's position, it's possible I would feel infuriated by your article, because with my anti-AI message I would have hated to end up triggering even more AI use.

“AI is killing the planet” is basically made up. It’s not. Not even slightly. Like all industries, it uses some resources, but this is not a bad thing.

People who are mad about AI just reach for the environmental argument to try to get the moral high ground.

  • it does not use "some" resources

    it uses a fuck ton of resources[0]

    and instead of reducing energy production and emissions we will now be increasing them, which, given current climate prediction models, is in fact "killing the planet"

    [0] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...

    • Data centers account for roughly 1% of global electricity demand and ~0.5% of CO2 emissions, as per your link. That's for data centers as a whole, since the IEA and some other orgs group "data-centres, AI, and cryptocurrency" as a single aggregate unit. On its own, AI accounts for roughly 10-14% of a given data center's total energy; cloud deployments make up ~54%, traditional compute around ~35%.

      The fact is that AI, by any definable metric, is only a sliver of the global energy supply right now. Outside the social media hype, what actual climate scientists and orgs talk about isn't (mostly) what AI is consuming now; it's what the picture looks like within the next decade. THAT is the real horror show if we don't pull policy levers. Anyone who says that AI energy consumption is "killing the planet" is either intentionally misrepresenting the issue or unbelievably misinformed. What's actually, factually "killing the planet" are energy/power, heavy industry (steel, cement, chemicals), transport, and agriculture/land use. AI consumption is a rounding error compared to these. We'll ignore the fact that AI is actually being used to manage DC energy efficiency and has reduced energy consumption at some hyperscale DCs (Amazon, Alibaba, Alphabet, Microsoft) by up to 40%, making it one of the only industry sectors with a real, non-trivial chance at net zero if deployed at scale.

      The most interesting thing about this whole paradigm is just how deep a grasp AI (specifically LLMs) has on the collective social gullet. It's like nothing I've ever been a part of. When Deepwater Horizon blew up and spilled 210M gallons of crude into the Gulf of Mexico, people (rightfully so) got pissed at BP and Transocean.

      Nobody, from what I remember, got angry at the actual, physical metal structure.


    • This, and the insane amount of resources (energy and materials) needed to build the disposable hardware. And all the waste it's producing.

      Simon,

      > I find Claude Code personally useful and aim to help people understand why that is.

      No offense, but we don't really need your help. You went on a mission to teach people to use LLMs; I don't know why you feel the urge, but it's not too late to stop doing this, and even to teach them not to, and why.


Two things can be true at once:

1. I think that sending "thank you" emails (or indeed any other form of unsolicited email) from AI is a terrible use of that technology, and should be called out.

2. I find Claude Code personally useful and aim to help people understand why that is. In this case I pulled off a quite complex digital forensics project with it in less than 15 minutes. Without Claude Code I would not have attempted that investigation at all - I have a family dinner to prepare.

I was very aware of the tension involved in using AI tools to investigate a story about unethical AI usage. I made that choice deliberately.

  • > Without Claude Code I would not have attempted that investigation at all - I have a family dinner to prepare.

    Then maybe you shouldn’t have done it at all. It’s not like the world asked for it or entrusted you with the responsibility for that investigation. It’s not like it was imperative to get to the bottom of this and you were the only one able to do it.

    Your defence is analogous to all the worst tech bros who excuse their bad actions with “if we did it right/morally/legally, it wouldn’t be viable”. Then so be it, maybe it shouldn’t be viable.

    You did it because you wanted to. It was for yourself. You saw Pike’s reaction and deliberately chose to be complicit in the use of technology he decried, further adding to his frustration. It was a selfish act.

    • I knew what I was doing. I don't know if I'd describe it as selfish so much as deliberately provocative.

      I agree with Rob Pike that sending emails like that from unreviewed AI systems is extremely rude.

      I don't agree that the entire generative AI ecosystem deserves all of those fuck yous.

      So I hit back in a very subtle way by demonstrating a little-known but extremely effective application of generative AI - for digital forensics. I made sure anyone reading could follow along and see exactly what I did.

      I think this post may be something of a Rorschach test. If you have strong negative feelings about generative AI you're likely to find what I did offensive. If you have favorable feelings towards generative AI you're more likely to appreciate my subtle dig.

      So yes, it was a bit of a dick move. But in the overall scheme of bad things humans do, I don't feel like it's very far over the "this is bad" line.
