Comment by ineedasername

2 months ago

The company he's worked at for nearly a quarter century has enabled and driven more consumerist spending across all areas of the economy via behaviorally targeted, optimized ad delivery, consuming resources and power orders of magnitude beyond the projected increases from data centers over the coming years. This level of vitriol seems both misdirected and practically obtuse in its lack of awareness of the part his work has played in far, far, far more expansive resource expenditure, in service of work far less promising for overall advancement: ad tech and the algorithmic exploitation of human psychology for prolonged media engagement.

To expand on my comment wrt "promising for overall advancement": my daughter, in her math class. Her teacher (I'll reserve overall judgement on their teaching: she may be perfectly adequate as a teacher for other students, which is part of my point) simply doesn't teach in the same sense other teachers do: present the topic, then leave the details of figuring out how to apply the methods to the students. That doesn't work for my daughter, who has never done less than excellent in math previously. She realized she could ask ChatGPT (we monitor usage) for any way of explaining things that "simply worked" for how she engages with explanations. Math has never been easier for her, even more so than before, and she is internalizing the material to a near-intuitive understanding.

Now consider: the above process is available, cheaply, to every person in the world with a web browser (we don't need to pay for her to have a Plus account). If/when ChatGPT starts doing ridiculous intrusive ads, a simple Gemma 3 1b model will do nearly as good a job. This is faster and easier and available in more languages than anything else, ever, with respect to individual, user-tailored customization simply by talking to the model.

I don't care how many pointless messages get sent. This is more valuable than any single thing Google has done before, and I am grateful to Rob Pike for the part his work has played in bringing it about.

  • Seconded — "AI" is a great teaching resource. All bigger models are great at explaining stuff and being good tutors, I'd say easily up to the second year of graduate studies. I use them regularly when working with my kid and I'm trying to teach them to use the technology, because it is truly like a bicycle for the mind.

Don't be ridiculous. Google has been doing many things, some of them even nearly good. The super talented/prolific/capable have always gravitated to powerful maecenases. (This applies to Haydn and Händel, too.) If you uncompromisingly filter potential employers by "purely a blessing for society", you'll never find employment that is both gainful and a match for your exceptional talents. Pike didn't make a deal with the devil any more than Leslie Lamport or Simon Peyton Jones did (each of whom worked for 20+ years at Microsoft, and has advanced the field immensely).

As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer.

  • I am not so sure about 'the mixed bag' vs 'unquestionably cancer', but I think the problem is that he is complaining while working for such a company.

    • Not a problem at all. I’m not sure why you feel the need to focus on all the uninteresting parts. The interesting parts are what he said and whether or not it is true. I'm not sure why who said something is more important than what was said, especially since focusing on the speaker doesn’t add much to the original discussion… it just misdirects attention without any clear indication of the motive!

  • > Don't be ridiculous.

    OP says, it is jarring to them that Pike is as concerned with GenAI as he is, but didn't spare a thought for Google's other (in their opinion, bigger) misgivings, for well over a decade. Doesn't sound ridiculous to me.

    That said, I get that everyone's socio-political views are different at different points in time, especially depending on their personal circumstances, including family and wealth.

    • > didn't spare a thought for Google's other (in their opinion, bigger) misgivings, for well over a decade

      That's the main disagreement, I believe. I'm definitely not an indiscriminate fan of Google. I think Google has done some good, too, and the net output is "mostly bad, but with mitigating factors". I can't say the same about purely AI companies.

  • > As IT workers, we all have to prostitute ourselves to some extent.

    No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.

    And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.

    • > non-profits

      I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise.

      > open source foundations

      Those dreams end. (Speaking from experience.)

      > education, healthcare tech

      These sectors are not self-sustaining anywhere, and are therefore highly tied to politics.

      > small companies solving real problems

      I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another.

      > The "we all have to" framing is a convenient way to avoid examining your own choices.

      This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back.

      > And it's telling that this framing always seems to appear when someone is defending their own employer.

      (I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.)

      > You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer")

      I did!

      > so you clearly believe these distinctions matter even though Google itself is an AI company

      Yes, I do believe that.

      Google has created Docs, Drive, Mail, Search, Maps, Project Zero. It's not all terribly bad from them; there is some "only moderately bad", and even morsels of "borderline good".


  • > But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer

    Google's DeepMind has been at the forefront of AI research for the past 11+ years. Even before that, Google Brain was making incredible contributions to the field since 2011, only two years after the release of Go.

    OpenAI was founded in response to Google's AI dominance. The transformer architecture is a Google invention. It's not an exaggeration to claim Google is one of the main contributors to the insanely fast-paced advancements of LLMs.

    With all due respect, you need some insane mental gymnastics to claim AI companies are "unquestionably cancer" while an adtech/analytics borderline monopoly giant is merely a "mixed bag".

    • > you need some insane mental gymnastics

      Perhaps. I dislike google (have disliked it for many years with varying intensity), but they have done stuff where I've been compelled to say "neat". Hence "mixed bag".

      This "new breed of purely AI companies" -- if this term is acceptable -- has only ever elicited burning hatred from me. They easily surpass the "usual evils" of surveillance capitalism etc. They deceive humanity at a much deeper level.

      I don't necessarily blame LLMs as a technology. But how they are trained and made available is not only irresponsible -- it's the pinnacle of calculated evil. I do think their evil exceeds the traditional evils of Google, Facebook, etc.

  • Okay, but the discourse Rob Pike is engaging in is, “all parts of an experience are valid,” so you see how he’s legitimately in a “hypocrisy pickle”

You're not wrong about the effects and magnitude of targeted ads but that doesn't preclude Pike from criticizing what he believes to be a different type of evil.

  • Sure, but it also doesn't preclude him from being wrong, or at least incomplete as expressed, about his work having had the exact same resource-consuming impact when used for ad tech, or additional impact with toxic social media.

He worked on: Go, the Sawzall language for processing logs, and distributed systems. Go and Sawzall are usable and used outside Google.

Are those distributed systems valuable primarily to Google, or are they related to Kubernetes, et cetera?

  • He was paid by Google with money made through Google’s shady practices.

    It’s like saying that it’s cool because you worked on some non-evil parts of a terrible company.

    I don’t think it’s right to work for an unethical company and then complain about others being unethical. I mean, of course you can, but words are hollow.

Google is huge. Some of the things it does are great. Some of the things it does are terrible. I don't think working for them has to mean that you 100% agree with everything they do.

  • If it's "Who is worse Google or LLMs?", I think I'll say Google is worse. The biggest issue I see with LLMs is needing to pay a subscription to tech companies to be able to use them.

    • You don't even need to do that (pay a subscription, I mean). A Gemma 3 4b model will run on near-potato hardware at usable speeds, and for many purposes achieves performance on par with ChatGPT 3.5 Turbo or better, in many tasks far more beneficial than ad tech and min/max'ing media engagement. Or use the free versions of many SOTA web LLMs, all free, to the world, if you have a web browser.

What are you implying? That he’s a hypocrite? So he’s not allowed to have opinions? If anything, he’s in a better position than a random person. And Google is a massive enterprise, with hundreds of divisions. I imagine Pike and his peers share your reluctance.

  • “I collected tons of money from Hitler and think Stalin is, like, super bad.” [sips Champagne]

    Of course, the scale is different but the sentiment is why I roll my eyes at these hypocrites.

    If you want to make ethical statements then you have to be pretty pure.

    • Are any of us better? We’re all sellouts here, making money off sleazy apps and products.

      I’m sorry but comparing Google to Stalin or Hitler makes me completely dismiss your opinion. It’s a middle school point of view.

I agree completely. Ads have driven the surveillance state and enshittification. They've allowed for optimized propaganda delivery, which in turn has led to true horrors and has helped undo a century of societal progress.

  • This is a tangent, but ads have become a genuine cancer on our world, and it's sad to see how few people really think about it. While Rob Pike's involvement in this seems to be very minimal, the fact that Google is an advertising company through-and-through does weaken the words of such a powerful figure, at least a little bit.

    If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time. Our entire world is owned by ads now, with digital and physical garbage polluting the internet and every open space in the real world around us. The marketing is mind-numbing, yet persuasive and well-calculated, a result of psychologists coming up with the best ways to abuse a mind into just buying the product over the course of a century. A total ban on commercial advertising would undo some of the damage done to the internet, reduce pointless waste, lengthen product lifecycles, improve competition, temper unsustainable hype, cripple FOMO, make deceptive strategies nonviable. And all of that is why it will never be done.

    • > If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time.

      but wait, in a few months, "AI" will be funded entirely by advertising too!

  • Yeah, I've built ad systems. Sometimes I'd give a presentation to some other department of programmers who worked on content, and someone would ask the tense question: Not to be rude, but aren't ads bad?

    And I'd promptly say: Ads are propaganda, and a security risk because they execute third-party code on your machine. All of us run adblockers.

    There was no need for me to point out that ads are also their revenue generator. They just had a burning moral question before they proceeded to interop with the propaganda delivery system, I guess.

    It would create unnecessary cognitive dissonance to convince myself of some dumb ideology just to feel better about wasting so much of my one (1) known life, so I just take the hit and stay honest about it. The real moral question is what I do about it: whether I intervene effectively to help dismantle such systems and replace them with something better.

    • Honestly, there is a place where ads can be useful and helpful. It's just not in the way our society has structured them. My best example is gaming news sites. I like video games, and when I want to see what's new I go to a gaming news site or a gaming forum; often these are even joined in partnership.

      It's opt-in, I see all the new games, big budget games, indie games, nothing is missed. There are no unwanted emails, no biased searches, no interrupting ads, it's all on my own terms. And it works!

      I really believe we can extend this model to other product categories, even all categories. Not in the exact way as gaming websites, but an opt-in "go to the market to see new cool shit" sort of way. It doesn't have to be propaganda with surveillance technology like it is now.