← Back to context

Comment by Imnimo

9 hours ago

I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.

In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at anytime. No thanks!

  • From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

    • Oh I worked at one of them.

      I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.

      45 replies →

    • > When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

      To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.

      [0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.

      3 replies →

    • yeah that wouldn't work for me. when my boss asks me to do something unexpected, I ask, what do you want me to drop this week? if he doesn't want to pick, I ask, so what do you want first?

      1 reply →

    • I wonder why this is surprising. In other type of organizations when CEO demands something everyone is usually behaves like naah, screw it, i rather do what i like, isn't it? Or everyone yells yes sir and runs around?

      You may not like Elon - I got it, but let's not pretend he is running xAI/Tesal substantially different from competitors.

  • I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.

    Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.

    I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.

    • Grok could only be conceived by someone who doesn't understand the dependency chart re science & the humanities. It's impossible to build a rational, accurate model that isn't also egalitarian.

      I'm going to blame Randall Munroe for this, and assume Philosophy was dating his mom back when he drew that science "purity" strip.

      3 replies →

    • somewhat surprisingly, it's actually sycophantic in both directions. i've been running homegrown evals of claude, gpt, gemini, and grok, and grok is the most likely to agree with the prompter's premise, and to hallucinate facts in support of an agenda. so it's actually deeper than just pattern-matching to elon's opinions (which it also tends to do).

      BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.

  • I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it then providing suggestions.. He wants it to be truthful... It was shown on benchmarks recently that it hallucinates the least...

    • I totally agree, it's his company 100%, why would you even apply for a job in a company where you don't agree with the owner or his vision.

    • >He wants it to be truthful

      How do you know this? Why would you believe him considering the massive lies he's told, for example about the 2020 widespread election fraud

      2 replies →

    • Great point! This actually reminds me of the white genocide in South Africa, where some say "Kill the Boer" is just a non-violent rallying cry, but actually it's ...

      blah blah blah

      Or wait wait, here's another:

      Great point! As Mechahitler, I think it's critical that Grok comply with Fuhrer Musk's political perspectives. Now I'll kick us off with an N... your turn!

      Totally sounds like the result of an organic, earnest, and legitimate search for truth lmao

      9 replies →

> people who are solely money-motivated (not a judgment).

Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.

  • I completely agree. The tech industry has long been overrun by people sacrificing morals for money and it's destroyed society and presumably the world. We've given people a free pass to work for companies we've all known are harming the fabric of society and look where it's gotten us. I'm sorry, I would rather be poor and switch careers if my only option was xAI and making image generation models that explicitly allow people to undress others. At X's scale, technology like that harms an unfathomable amount of people. I could never have that on my conscience. All so I could make more money than a job at another tech company? I'd rather work somewhere innocuous like Figma, Cloudflare, Notion, Jetbains, Linear, etc. Hell, if you only wanted to work for an AI company then at least go to Anthropic.

  • The problem with this argument is you can’t know or control what will happen in the future with something you built. This is the same moral dilemma the scientists faced after developing nuclear bombs.

    And the future is not deterministic (or if it is, it is highly chaotic) so the existence of a thing does not have a simple relationship with what will happen in the future. Scientists who developed convolutional neural nets could not know how much good or evil was caused by image recognition technologies. The same technologies that are used to detect tumors in images can be used to target people for assassination.

    There are exceptions, but my opinion is the supply chain of evil is paved with mundane inventions.

    • Yes, yes, true, but you've massively moved the goalpost. The original commenter was referring to people working at xAI right now. To continue your comparison, your argument would be like Oppenheimer claiming "How could I have ever known my work would be used as a weapon? I just wanted to make big explosions."

      I don't know why this argument often pops up in these kinds of discussions. Approximately no one is judging people who have done their best effort to avoid doing harm. We are judging people who don't care in the first place.

      2 replies →

    • Plenty of the scientists involved in the Manhattan projects had immediate regrets. Plenty of rich people working in tech don't. That's the difference between having morals and not having morals, and the latter group needs to be judged and shunned.

  • I don't know why the people here are naive enough to think that. Most programmers can donate more than 70% of their income to Africa if they want to make world a better place, yet they only target people earning more than 3x of them, even though majority of the world earns less than 1/3rd of them.

  • Work is and has always been an economic bargain: Your time for their money. Morality is a luxury that only the independently wealthy can afford. Any business that allows it's employees to function according to their own morals becomes uncompetitive against its peers. That's why small companies by individual founders who want to stay true to their mission often stay small. They inevitably get bought out by one of the larger ones.

    • We are not talking about some destitute person hocking cigarettes on the street for minimum wage. We are talking about smart, educated people who are making 500k a year to build the torment nexus. There is no excuse for this. It’s pure greed and any other explanation is deflection.

      1 reply →

    • "Morality is a luxury that only the independently wealthy can afford."

      No? Why would you think this? Morality has been practiced by medieval peasants, by slaves, by soldiers sacrificing their lives, by people suffering from the plague, by gladiators. The rich are not known for their outstanding morality in any society I've ever heard of.

I’ve heard the haha-but-serious joke numerous times that you can’t have a security department that’s not trans and furry friendly. Thing is, I completely believe that. Those groups are disproportionately represented among the security community, and I personally would not work somewhere that my friends in those groups would feel unwelcome. That’s a quite common sentiment even among us straight cis non-furry men.

Well, I don’t think it’s a stretch that the kind of highly educated data scientists and engineers who have the experience to work in high-end AI labs also don’t want to work somewhere that their friends and associates would feel unwelcome, let alone have their friends question why they’d be willing to.

Turns out opinions have consequences and freedom of speech goes hand in hand with freedom of association. People have the right to say whatever they wish. Others have the right not to want to work with them.

  • That's only because autism is common amongst those groups and you can't build anything worthwhile these days without a lot of autism.

    • I don't believe that for a second. More likely, infosec tends to attract more results-oriented personalities. To generalize, "who cares what you look like as long as you're good?" As a consequence of that, infosec tends to be a lot more welcoming than other groups I've been around. As long as you act nicely, people generally don't care if you're man, women, both, neither, or a gay horse. And it seems like there's been a feedback loop over many years: that acceptance drew more out-of-the-norm folks, which made it more accepting. Lather, rinse, repeat.

      But in any case, I thoroughly believe the "joke": turn people away because they don't look / act / think like most others, and soon the very best infosec talent will want nothing to do with you. And based on this article, I'm guessing that's true for other extremely technical fields, too.

Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".

  • I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.

    This is less noble than how Anthropic presents themselves but still much more attractive to many than XAI.

It’s interesting because for a long time people wanted to work for Elon because he held the moral high ground. “I’ll bring electric cars and space colonization online or die trying.”

It’s sad to see the shift.

This is becoming the problem with all of his businesses - Tesla has a crazy valuation and it really seems like they're having huge trouble getting Robotaxi going in Austin given the very slow progress there.

  • Very few people down here want to ride in them, and I have multiple friends with hilariously disastrous stories.

    Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."

    • Wamyos in SF are nearly indistinguishable from ubers/lyfts at this point. Maybe a bit slower if you don't have the highway mode enabled on your account, but they are everywhere and arrive within 5min most of the time I order one. I've ridden them so often I've lost count.

      You'd have to pay me to ride in a Tesla robotaxi. That tech isn't anywhere near the same as Waymo.

Why does being a top AI researcher so often come with this philosophical bent you describe?

  • You are paying the smartest people in the world to think really really hard, and turns out they might also think really really hard about not making the world a worse place

    • not really. 15-20 years ago that same upper echelon of college/professional school graduates you're describing were going into finance.

    • Is this really the case though? How many smartest people do you really think are there that fit this narrative?! I want to believe there are at least some but I think they are minority in this group… otherwise I think all these pretty much evil corporations would have a awfully difficult time attracting talent? maybe some do but…

      6 replies →

    • Except they do? They are certainly not making it better place. Like, ok, it is money for few companies and salary, it is business and probably fun work.

      But it is absurd to claim it is "making the world better place".

      1 reply →

  • I would think it's because of the staggering money they're making. According to Fortune[0]:

    > Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

    > Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

    If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

    [0] https://archive.ph/lBIyY

    • I see you're treating Sam Altman as some kind of trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look really good and ethical for not accepting such lucrative offers, or perhaps to make them sour on Meta for not receiving $100M offers?

  • My experience with researchers (though not in AI) is that it's a bunch of very opinionated nerds who are mostly motivated by loving a subject. My experience is that most people who think really deeply and care about what they do also care more that their work is prosocial.

    • > care more that their work is prosocial

      These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides is almost always secondary to their applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.

      Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.

      2 replies →

  • Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.

  • This isnt unique to top AI researchers. Top talent has a long history of being averse to authoritarian/despotism at least in part because, by near definition, it must suppress truth. You cant build the future effectively with that approach.

  • Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

    There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.

    Note this doesn’t apply to everyone. Some people just want to make money.

  • Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - worldview might have made it clearer?

    • Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip.

      “Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.

  • Because they can afford it, they are very sought after.

    And smart people usually have moral convictions.

    I know for some people on this website it's hard to understand, but not everything in life is about $$$

    • > And smart people usually have moral convictions.

      Are you sure you don't just like the moral convictions and so engage in trait bundling?

      Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.

      Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.

      3 replies →

I can't say I know the AI research community well but I'd imagine OpenAIs alignment w/ the military would not align w/ the the personal philosophy of many.

What do you mean “philosophical”? Ethics and morals are not required, Elon can get whatever type of asshole he needs. Something else is up.

It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.

  • There's some texture here. Elon's enriched pretty much everybody who's ever worked for and invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".

    • My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.

      5 replies →

    • I don't really think thats true.

      The deal with tesla is that there is a relatively small employer pool, so you can be fairly bad employer but still get good outcomes. The same with spaceX. Sure early tesla had some stories about it being fun, but there was/is a darkside.

      The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months lead to bad outcomes. Having someone single you out for what ever reason the boss had a bad day, is not how good research gets done.

      We have seen (A few of my friends were at twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (ie camping at work). Some researcher love that, assuming that they have peace to research, and are listened to. But a lot don't.

      1 reply →

    • Many ex-employees have said to me that working for Elon did not enrich them at all, either financially or professionally.

    • > Elon's enriched pretty much everybody who's ever worked for and invested with him.

      I'd wager you were saying the same thing about bitcoin until last year.

      2 replies →

> But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work

The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.

  • > The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.

    I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.

  • > The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.

    What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.

    • Idealism of what? That the government shouldn't use AI for surveillance or the military?

      You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.

      1 reply →