
Comment by InsideOutSanta

2 months ago

Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I'm assuming his point is that the cost-benefit ratio of how LLMs are often used is out of whack.

Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.

Google has been burning compute for the past 25 years to shove ads at people. We all lost there, too, but he apparently didn’t mind that.

  • Data center power usage was fairly flat for about a decade, until 2022 or so. While new capacity kept coming online, efficiency improvements kept pace, so total usage stayed mostly flat.

    The AI boom has completely changed that. Data center power usage is rocketing upward now; it is estimated it will exceed 10% of all US electricity usage by 2030.

    It's a completely different order of magnitude from pre-AI-boom data center usage.

    Source: https://escholarship.org/uc/item/32d6m0d1

    • This is where the debate gets interesting, but I think both sides are cherrypicking data a bit. The energy consumption trend depends a lot on what baseline you're measuring from and which metrics you prioritize.

      Yes, data center efficiency improved dramatically between 2010 and 2020, but the absolute scale kept growing. So you're technically both right: efficiency gains kept per-unit costs down while total infrastructure expanded. The 2022+ inflection is real, though, and it's not just about AI training. Inference at scale is the quiet energy hog nobody talks about enough.

      What bugs me about this whole thread is that it's turning into "AI bad" vs "AI defenders," when the real question should be: which AI use cases actually justify this resource spike? Running an LLM to summarize a Slack thread probably doesn't. Using it to accelerate drug discovery or materials science probably does. But we're deploying this stuff everywhere without any kind of cost/benefit filter, and that's the part that feels reckless.

    • "google has been brainwashing us with ads deployed by the most extravagant uses of technology man has ever known since they've ever existed."

      "yeah but they became efficient at it by 2012!"

  • > Google has been burning compute for the past 25 years to shove ads at people. We all lost there, too, but he apparently didn’t mind that.

    How much of that compute was for the ads themselves vs the software useful enough to compel people to look at the ads?

    • Have you dived into the destructive brainrot that YouTube serves to millions of kids who (sadly) use it unattended each day? Even much of Google's non-ad software is a cancer on humanity.

      11 replies →

  • You could at least argue that, while there are plenty of negatives, we got to use many services under the ad-supported model.

    There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's literally fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently", all while rampantly stealing content to do it. Apparently pirating something as a person is a terrible crime the government needs to chase you for, but do it to resell in an AI model and it's propping up the US economy.

    • I feel you. Back at the beginning of the MP3 era, the record industry was pursuing people for pirating music. And then when an AI company does it for books, it's somehow not piracy?

      If there is any example of hypocrisy, and of a justice system that doesn't apply the law equally, that would be it.

  • Someone paid for those ads. Someone got value from them.

    • It isn't that simple. Each company paying for ads would have preferred that its competitors had not advertised, so it could spend a lot less on ads... for the same value.

      It is like an arms race. Everyone would have been better off if people just never went to war, but....

      1 reply →

  • “this other thing is also bad” is not an exoneration

    • > “this other thing is also bad” is not an exoneration

      No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to kids' mental health and social polarization.

      Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.

      19 replies →

    • > “this other thing is also bad” is not an exoneration

      Data centers are not another thing when the subject is data centers.

  • The ad system uses a fairly small fraction of resources.

    And before the LLM craze there was a constant focus on efficiency. Web search is (was?) amazingly efficient per query.

  • We weren't facing hardware shortages in the race to shovel ads. A little different.

  • Btw, how do you calculate the toll that ads take on society?

    I mean, buying another pair of sneakers you don't need just because ads made you want them doesn't sound like the best investment from a societal perspective. And I am sure sneakers are not the only product that is being bought, even though nobody really needs them.

  • That's frankly just pure whataboutism. The scale of the explosion in "AI" data centres is far, far higher, and the spike is far more immediate, too.

    • It’s not really whataboutism. Would you take an environmentalist seriously if you found out that they drive a Hummer?

      When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company with a horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.

      15 replies →

It's dumb, but energy-wise, isn't this similar to leaving the TV on for a few minutes even though nobody is watching it?

Like, the ratio is not too crazy; it's rather that the large resource usage comes from the aggregate of millions of people choosing to use it.

If you assume all of those queries provide no value then obviously that's bad. But presumably there's some net positive value that people get out of that such that they're choosing to use it. And yes, many times the value of those queries to society as a whole is negative... I would hope that it's positive enough though.
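The TV comparison above can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming a commonly cited rough figure of ~0.3 Wh per chat query and a ~100 W TV; both numbers are loose assumptions for scale, not measurements:

```python
# Rough energy comparison: one LLM chat query vs. an idle TV.
# All inputs are loose assumptions for scale, not measured values.
LLM_QUERY_WH = 0.3   # commonly cited rough estimate per chat query, in Wh
TV_POWER_W = 100     # typical TV power draw, in W
TV_MINUTES = 5       # "a few minutes" of nobody watching

tv_wh = TV_POWER_W * TV_MINUTES / 60   # energy the idle TV burns, in Wh
queries_per_tv = tv_wh / LLM_QUERY_WH  # equivalent number of chat queries

print(f"idle TV for {TV_MINUTES} min: ~{tv_wh:.1f} Wh")
print(f"that's roughly {queries_per_tv:.0f} LLM queries")
```

On those assumptions the idle TV and a handful of queries are within the same ballpark, which is the point being made: the concern is the aggregate of millions of users, not any single query.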

> Everything humans do is harmful to some degree.

I find it difficult to express how strongly I disagree with this sentiment.

  • You can make an argument supporting your disagreement.

    • There are two possible forks. The physical fork involves factual disagreement about how much humanity has built vs. destroyed, the relative ease of destruction over construction, and an argument that, given entropy and other effects, even a slight bias toward production would yield little positive; it leads to the conclusion that humans nonetheless produce vastly more than they consume, even though production is, as mentioned, more difficult.

      The value or "moral" fork would be trying to convince you that building, producing, and growing was actually helpful rather than harmful.

      I don't imagine we actually disagree on the physical fork, making that argument pretty pointless: clearly humans and human civilization are learning, growing, and still have a strong potential to thrive as long as ASI, apathy, or a big rock don't take us out first. Instead, I took your statement as an indication that you don't actually positively value humans, more humans, humans growing, and humans building things. That's a preferences and values disagreement, and there's no way to rationally or logically argue someone into changing their core values. No ought from is, and all that.

      I'm not suggesting, by the way, that people's values don't change, or can't be changed by discussion, only that there is no way to do so with logical argument; reason can get you to your goal, but it can't tell you what ultimate goal to want.

      Anyway, I was expressing that I like humans and want humans (or people who themselves used to be humans, in the limit) to continue and do more, rather than arguing that you ought to feel the same.

      2 replies →

Serving unwanted ads has what cost-benefit ratio vs. serving LLMs that are wanted by the user?

  • Asking about the value of ads is like asking what value I derive from buying gasoline at the petrol station. None. I derive no value from it, I just spend money there. If given the option between having to buy gas and not having to buy gas, all else being equal, I would never take the first option.

    But I do derive value from owning a car. (Whether a better world exists where my and everyone else's life would be better if I didn't is definitely a valid conversation to have.)

    The user doesn't derive value from ads, the user derives value from the content on which the ads are served next to.

    • > what value I derive from buying gasoline at the petrol station. None. I derive no value from it, I just spend money there.

      The value you derive is the ability to make your car move. If you derived no value from gas, why would you spend money on it?

      1 reply →

  • Ads are extremely computationally cheap

    • But mining all the tracking data in order to show profitable targeted ads is extremely intensive. That’s what kicked off the era of “big data” 15-20 years ago.

      1 reply →

  • Every piece of content generated by an LLM was served to me against my will and without accounting for my preferences.

    • The generation of the content was done intentionally though. If they saved the output and you visited their site it wasn’t really generated for you (rather just static content served to you).

  • > LLMs that are wanted by the user

    If people wanted LLMs, you probably wouldn't have to advertise them as much.

    No, the reality of the matter is that LLMs are being shoved at people. They've become the talk of the town, and algorithms amplify any development related to LLMs.

    The ads are shoved at users. Trust me, the average person isn't that enthusiastic about LLMs, and for good reason, when the people who have billions of dollars say "yes, it's a bubble, but it's all worth it," and when the workforce itself is being replaced, or actively talked about as being replaced, by AI.

    We sometimes live in a Hacker News bubble of like-minded people and communities, but even on Hacker News we see disagreements (I am usually anti-AI, mostly because of the negative financial impact the bubble is gonna have on the whole world).

    So your point becomes a bit moot in the end. That said, Google (not sure how it was in the past) and big tech can sometimes actively promote, or close their eyes to, scammy ad sponsors, so ad blockers are generally really good in that sense.

> Everything humans do is harmful to some degree

That's just not true... When a mother nurses her child and then looks into their eyes and smiles, it takes the utmost in cynical nihilism to claim that is harmful.

  • I could be misinterpreting the parent myself, but I didn't bat an eye at the comment because I interpreted it similarly to "everything humans (or anything, really) do increases net entropy, which is harmful to some degree for earth". I wasn't considering the moral good vs. harm that you bring up, so I had been reading the discussion from the priorities of minimizing unnecessary computing scope creep, where LLMs are being pointed to as a major aggressor. While I don't disagree with you and those who feel that statement is anti-human (another responder said this), this is what I think the parent was conveying, not that all human action is immoral to some degree.

    • Yes, this is what I meant. I used the word "harmful" in the context of the argument that LLMs are harmful because they consume resources (i. e. increase entropy).

      But everything humans do does that. Everything increases entropy. Sometimes we find that acceptable. So when people respond to Pike by pointing out that he, too, is part of society and thus cannot have the opinion that LLMs are bad, I do not find that argument compelling, because everybody draws that line somewhere.

> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.

Just like the invention of Go.

> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.

Well, the people who burnt compute paid money for it, so they did burn money.

But they don't care about burning money if they can take in more money from investors and other inputs faster than they can burn it (fun fact: sometimes they even outspend that input).

So in a way the investors are burning their money, and they keep burning it because the market has become irrational. Remember Devin? Yes, Cognition Labs is still around, but I remember people investing in these things because of the hype, even when the results turned out to be moot compared to that hype.

The market was so irrational that private-equity funds unable to invest in something like OpenAI were investing in anything AI-related instead.

And when you think more deeply about all the bubble activity, it becomes apparent that in the end bailouts feel more likely than not, which would be a tax on average taxpayers. They are already paying an AI tax in multiple forms, whether in the inflation of RAM prices due to AI or in increased electricity and water rates.

So repeat after me: who's gonna pay for all this? We all will. But the biggest disservice, which is the core of the argument, is that if we are paying for these things, why don't we have a say in them? Why do we have no say in AI companies and the issues surrounding them, when people know AI might take their jobs? The average member of the public in fact hates AI (shocking, I know /satire), but the fact that it's still being pushed shows how little influence the public can have.

Basically, "the public can have any opinions it wants, but we won't stop" is what's happening in the AI space, imo, completely disregarding the general public, while the CFO of OpenAI floats the idea that the public could bail out ChatGPT or something tangential.

Shaking my head...

Somebody just burned their refuse in a developing country somewhere. I guess if it was cold, at least they were warming themselves up.

Cutting trees for fuel and paper to send a letter burned resources. Nobody gained in that transaction.

  • I shouldn't have to explain this, but a letter would involve actual emotion and thought and be a dialog between two humans.

    • When the thought is "I'd like this person to know how grateful I am", the medium doesn't really matter.

      When the thought is "I owe this person a 'Thank You'", the handwritten letter gives an illusion of deeper thought. That's why there are fonts designed to look handwritten. To the receiver, they're just junk mail. I'd rather not get them at all, in any form. I was happy just having done the thing, and the thoughtless response slightly lessens that joy.

    • We’re well past that. Social media killed that first. Some people have a hard time articulating their thoughts. If AI is a tool to help, why is that bad?

      5 replies →

    • I shouldn't have to explain this, but a letter is a medium of communication, that could just as easily be written by a LLM (and transcribed by a human onto paper).

      4 replies →

  • Someone taking the time and effort to write and send a letter and pay for postage might actually be appreciated by the receiver. It's a bit different from LLM agents being ordered to burn resources to send summaries of someone's work life and congratulate them. It feels like "hey, look what can be done, can we get some more funding now?" Just because it can be done doesn't mean it adds any good value to this world.

  • How is it that so many people who supposedly lean towards analytical thought are so bad at understanding scale?