
Comment by candiodari

7 months ago

A) This is short-sighted. What you're suggesting is in fact a way to optimize short-term gain over long-term viability. It's pure MBA tactics.

Additionally, it's a complete and total oversimplification. If you look at Google's earnings, it's pretty damn clear that at least until 2020 they were not just going for maximum total spend, but for a steady, gradual rise in total spend. Not too slow, not too fast. They were NOT taking every opportunity they had; in fact they're famous for systematically refusing many opportunities (see the original founders' letter, but even after that). They were farming the ad market, the ad spend, growing it, nurturing it. Then COVID blew up the farm.

Maybe you're right now, but I do hope they're recovering their old tactics. Because if they maximize it you'd see nothing but scams ... wait a second.

B) Google was built by providing a vision, and getting out of the way of ground-up engineer efforts. "Scaring workers into compliance" IS killing the golden goose.

You can see this in AI. Every story from an AI engineer that ran away from Google is the same. They didn't run away for the money, they ran away because they were getting scared into compliance.

Now AI may make it, or not. I don't know. But this is happening EVERYWHERE in Google. Every effort. Every good idea, and every bad idea, runs away, usually inside the mind of "a worker". Not to make them personally maximum money; it's natural selection: if the idea doesn't run away, the engineer carrying it gets "scared into compliance", into killing the idea.

Whatever the next big thing turns out to be, it simply cannot come out of Google. And it will hit suddenly, just like it did for Yahoo.

Totally agree on the overall prognosis of Google - I am (also?) one of said engineers! Here’s a recent update from a tiny corner of the company: the rank and file are still incredibly smart and generally well-intentioned, but are following hollow simulacra of the original culture - all-hands, dogfooding, internal feedback, and ground-up engineering priorities are all maintained in form, but they are now rendered completely functionless. I am personally convinced that the company is - or was, before ChatGPT really took off - focused on immediate short-term stock value above all else. After all, if you were looking down the barrel of multiple federal and EU antitrust suits and dwindling public support for the utility you own and operate, you might do the same…

I guess I’m standing up for the simple idea that terribly inefficient organizations can prevail when they’re the incumbents, at least for significant periods if not forever. We can’t be complacent and assume they’ll fall on their own, especially when AGI threatens social calcification on an unheard-of scale.

  • Drop your good intentions - towards Google, that is.

    Work to sabotage and collapse the organization - do that for the good of humanity.

    Thank you for your work, and good luck getting out without harm or reprisal <3

    Hit em hard.

    • Why would Google's collapse be for the good of humanity? When was a power vacuum ever beneficial?

      "Build a better search engine for the good of humanity", I can understand. "Kill a search engine for the good of humanity" is a reductive, childish take.


• Very much appreciate the sentiment and kind words! Reminds me of Yudkowsky’s line[1] about AI: “we should be willing to destroy a rogue datacenter by airstrike.” This kind of talk sounds insane in the Silicon Valley language game, but we’re talking about real people’s lives here, and sometimes implied violence needs to be made explicit. And that’s what I see your suggestion as, ultimately - but that’s probably because I got an American HS education, so the Malcolm X vs. MLK Jr. debate was driven into my mind quite thoroughly.

      [1] https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

      Luckily/unluckily I left already due to factors out of my control. Regardless, for all of Google’s faults I will say that they were incredibly serious about data security and respecting consumer data protection laws with strict oversight, so I think “sabotage” in a direct sense would be incredibly hard + risky. The only solution I see is continuing to organize for government regulation. I would include worker organization within Google, but I recently learned they represent less than half a percent of the company…


> You can see this in AI. Every story from an AI engineer that ran away from Google is the same. They didn't run away for the money, they ran away because they were getting scared into compliance.

Can you elaborate?