Comment by subdavis
2 months ago
I truly don’t understand this tendency among tech workers.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
The problem is that it's reached a tipping point. Comparing Excel to GenAI is just bad faith.
Are you not reading the writing on the wall? These things have been going on for a long time and finally people are starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.
Two things:
1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.
2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).
In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).
I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.
I'm not sure either (1) or (2) are the problem.
I can understand someone telling me I'm an old man shouting at clouds if (2) works out.
But at least (2) is about a machine saving someone's time (we don't know at what cost, and for whose benefit).
My biggest problem with LLMs (and the email Rob got is an example) is when they waste people's time.
Like maintainers getting shit vibe-coded PRs to review, and when we react badly it's “oh, you're one of those old-schoolers who have a policy against AI.”
No kid, I don't have an AI policy, just as I don't have an IDE policy. Use whatever the hell you want – just spare me the slop.
> Many tech workers viewed the software they worked on in the past as useful in some way for society
Ah yes, crypto, Facebook, privacy destruction, etc. Indeed, they made the world such a nice place!
Copyright was an evil institution to protect corporate profits until people without any art background started being able to tap AI to generate their ideas.
Copyright did evolve to protect corporations. Most of the value from a piece of IP is extracted within the first 5-10 years, so why do we have an "author's life + a bunch of years" term on it? Because it's no longer about making sure the author can live off their IP; it's so corporations can hire some artists for pennies (compared to the value they produce for the company) and leech off that for decades.
So let us compare AI to aviation. Globally, aviation accounts for approximately 830 million tons of CO₂ emissions per year [1]. If you power your data centre with quality gas power plants you will emit 450 g of CO₂ per kWh of electricity consumed [2]; that is 3.9 million tons per year for a 1 GW data centre. So depending on the power mix it will take somewhere around 200 GW of data centres for AI to "catch up" to aviation. I have a hard time finding any numbers on current consumption, but if you believe what the AI folks are saying we will get there soon enough [3].
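For anyone who wants to check the arithmetic, here is a minimal back-of-envelope sketch in Python using the same assumed inputs (450 g CO₂/kWh for gas power, 830 Mt/year for aviation, a data centre running flat out); these are the comment's assumptions, not measured data:

```python
# Back-of-envelope check of the figures above; all inputs are assumptions.
GAS_INTENSITY_KG_PER_KWH = 0.450   # ~450 g CO2 per kWh from a gas plant
HOURS_PER_YEAR = 24 * 365          # 8760 h, assuming the DC runs continuously
AVIATION_TONS_PER_YEAR = 830e6     # ~830 Mt CO2 from global aviation

kwh_per_gw_year = 1e6 * HOURS_PER_YEAR   # 1 GW = 1e6 kW
tons_per_gw_year = kwh_per_gw_year * GAS_INTENSITY_KG_PER_KWH / 1000
print(f"{tons_per_gw_year / 1e6:.1f} Mt CO2 per GW-year")                       # ~3.9
print(f"{AVIATION_TONS_PER_YEAR / tons_per_gw_year:.0f} GW to match aviation")  # ~211
```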
As for what your individual prompts contribute, it is impossible to get good numbers, and it will obviously vary wildly between types of prompts, choice of model and number of prompts. But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.
Now, if this new tool allowed us to do amazing new things, there might be a reasonable argument that it is worth some CO₂. But when you are a programmer and management demands AI use so that you end up doing a worse job, while having worse job satisfaction, and spending extra resources, it is just a Kinder egg of bad.
[1] https://ourworldindata.org/grapher/annual-co-emissions-from-... [2] https://en.wikipedia.org/wiki/Gas-fired_power_plant [3] https://www.datacenterdynamics.com/en/news/anthropic-us-ai-n...
> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.
I don't know about the gigawatts needed for future training, but this comparison of prompts with plane trips looks wrong. Even making a prompt every second for 24 hours amounts to only 2.6 kg CO2 on the average Google LLM evaluated here [1]. Meanwhile, typical flight emissions are 250 kg per passenger per hour [2]. So it would take parallelization across 100 or so agents, each prompting once a second, to match this, which is quite a serious scale.
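As a quick sanity check of that arithmetic (the per-prompt figure is the ~0.03 g implied by the 2.6 kg/day number, roughly Google's reported median for a text prompt; the flight figure is from [2]):

```python
# Rough check of the prompt-vs-flight comparison; both inputs are
# assumptions taken from the figures cited in this comment.
G_CO2_PER_PROMPT = 0.03              # grams CO2e per median text prompt
PROMPTS_PER_DAY = 24 * 60 * 60       # one prompt every second, all day
FLIGHT_KG_PER_PASSENGER_HOUR = 250   # kg CO2 per passenger flight hour

kg_per_day = G_CO2_PER_PROMPT * PROMPTS_PER_DAY / 1000
print(f"{kg_per_day:.1f} kg CO2 per day of nonstop prompting")    # ~2.6
print(f"{FLIGHT_KG_PER_PASSENGER_HOUR / kg_per_day:.0f} parallel agents "
      f"needed to emit one flight-hour per day")                  # ~96
```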
[1] https://cloud.google.com/blog/products/infrastructure/measur...
[2] https://www.carbonindependent.org/22.html
Lots of things to consider here, but mostly that is not the kind of prompt you would use for coding. Serious vibe coders will ingest an entire codebase into the model, and then use some system that automates iterating.
Basic "ask a question" prompts indeed probably do not cost all that much, but they are also not particularly relevant in any heavy professional use.
When they stopped measuring compute in TFLOPS (or any deterministic compute metric) and started using gigawatts instead, you knew we were heading in the wrong direction.
https://nvidianews.nvidia.com/news/openai-and-nvidia-announc...
> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.
I'm fairly certain that your math on this is orders of magnitude off unless you define "prompting all day" in a very non-standard way yet aren't doing so for plane trips, and that 99% of people who "prompt all day" don't even amount to 0.1 plane trip per year.
OpenAI's AI data centers will consume as much electricity as the entire nation of India by 2033 if they hit their internal targets[0].
No, this is not the same.
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
That’s interesting. Why do you think this is worth taking more seriously than Musk’s repeated projections for Mars colonies over the last decade? We were supposed to have one several times over by this point.
Because we know how much power it's actually going to take? Because OpenAI is buying enough fab capacity and silicon to spike the cost of RAM 3x in a month? Because my fucking power bill doubled in the last year?
Those are all real things happening. Not at all comparable to Muskian Vaporware.
> We’ve been compromising on those morals for our whole career
Yes!
> The needle moved just a little bit
That's where we disagree.
I suspect people talk about natural resource usage because it sounds more neutral than what I think most people are truly upset about -- using technology to transfer more wealth to the elite while making workers irrelevant. It just sounds more noble to talk about the planet instead, but honestly I think talking about how bad this could be for most people is completely valid. I think the silver lining is that the LLM scaling skeptics appear to be correct -- hyperscaling these things is not going to usher in the (rather dystopian looking) future that some of these nutcases are begging for.
Let's be careful here. It's generally a good idea to congratulate people for changing their opinion based on evolving information, rather than lambast them.
(Not a tech worker, don't have a horse in this race)
They aren’t changing their opinion though. They aren’t seeking to scale back non-AI tech.
> The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
It's similar to the Trust Thermocline. There's always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.
Edit: There's also a "Trust Thermocline" element at play here too. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line.

Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't have to even pretend anymore!" See Zuckerberg on the right-wing media circuit. And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off.

And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario. So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.
Well said. AI makes people feel icky; that’s the actual problem. Everything else is post-rationalisation they add because they already feel gross about it. Feeling icky about it isn’t necessarily invalid, but it’s important for us to understand why we actually like or dislike something so we can focus on any solutions.
> AI makes people feel icky
Yes!
> it’s important for us to understand why we actually like or dislike something
Yes!
The primary reason we hate AI with a passion is that the companies behind it intentionally keep blurring the (now) super-sharp boundary between language use and thinking (and feeling). They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling. For the first time in the history of the human race, "talks entirely like a human" does not at all mean that it's a human. And instead of disabusing users of this -- natural, evolved, understandable -- mistake, these fucking companies double down on the delusion -- because it's addictive for users, and profitable for the companies.
The reason people feel icky about AI is that it talks like a human, but it's not human. No more explanation or rationalization is needed.
> so we can focus on any solutions
Sure; let's force all these companies by law to tune their models to sound distinctly non-human. Also enact strict laws that all AI-assisted output be conspicuously labeled as such. Do you think that will happen?
I believe that’s the main reason why you dislike AI, but I believe if you asked everyone who hated AI, many would come up with different main reasons why they dislike it. I doubt that solution would work very well, even though it’s well intentioned; it’s too easy to work around, especially with text. But at least it’s direct. Really, my main point is that we need to sidestep the emotional feelings we have about AI and actually present cold, hard legal or moral arguments, where they exist, with specific changes requested, or be dismissed as just hating it emotionally.
> They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling
Maybe this will force humans to raise their game, and start to exercise discrimination. Maybe education will change to emphasize this more. The ability to discern sense from pleasing rhetoric has always been a problem; every politician and advertiser takes advantage of this. Reams of philosophy have been written on this problem.
> The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect.
Nvidia to cut gaming GPU production by 30 - 40% starting ...
https://www.reddit.com/r/technology/comments/1poxtrj/nvidia_...
Micron ends Crucial consumer SSD and RAM line, shifts ...
https://www.reddit.com/r/Games/comments/1pdj4mh/micron_ends_...
OpenAI, Oracle, and SoftBank expand Stargate with five new AI data center sites
https://openai.com/index/five-new-stargate-sites/
> Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
I'm a software developer. I don't take planes for work.
> We’ve been compromising on those morals for our whole career.
So your logic seems to be, it's bad, don't do anything, just floor it?
> I’m not an AI apologist.
Really? Have you just never heard the term "wake up call?"
At least Excel worked a lot better.
> I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
You are right, and thus downvoted, but I still see the current outcry as positive.
I appreciate this and many of the other perspectives I’m encountering in the replies. I agree with you that the current outcry is probably positive, so I’m a little disappointed in how I framed my earlier comment. It was more contrarian than necessary.
We tech workers have mostly been villains for a long time, and foot stomping about AI does not absolve us of all of the decades of complicity in each new wave of bullshit.
That's fine, you do you. Everyone gets to choose for themselves!
It still feels like you haven’t absorbed their absolutely valid point that you may be hating first and coming up with rationalisations afterwards. There’s a more rational way to tackle this.
Do people really need to be more rational about this than AI itself?
Or has the bar been lowered in such a way that different people now regard it as unsavory in different ways, which wouldn't happen if everyone were more rational across the board?
[flagged]
Are the intentions of the AI creators icky though? The ick didn't come from nowhere.
> tech workers to be quite intellectually lazy and revisionist
I have yet to meet a single tech worker who isn't.
[dead]