Elon Musk pushes out more xAI founders as AI coding effort falters

11 hours ago (ft.com)

https://archive.ph/rP4cb (text at bottom)

https://x.com/elonmusk/status/2032201568335044978, https://xcancel.com/elonmusk/status/2032201568335044978

https://economictimes.indiatimes.com/tech/artificial-intelli...

https://futurism.com/artificial-intelligence/elon-musk-screw...

All: please stick to thoughtful, substantive discussion. You may not owe you-know-whom better, but you owe this community better if you're participating in it.

If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.

https://news.ycombinator.com/newsguidelines.html

I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.

  • In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at any time. No thanks!

    • From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

      45 replies →

    • I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.

      Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.

      I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.

      4 replies →

    • I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it and then providing suggestions. He wants it to be truthful, and benchmarks recently showed it hallucinates the least.

      18 replies →

  • > people who are solely money-motivated (not a judgment).

    Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.

    • I completely agree. The tech industry has long been overrun by people sacrificing morals for money and it's destroyed society and presumably the world. We've given people a free pass to work for companies we've all known are harming the fabric of society, and look where it's gotten us. I'm sorry, I would rather be poor and switch careers if my only option was xAI and making image generation models that explicitly allow people to undress others. At X's scale, technology like that harms an unfathomable number of people. I could never have that on my conscience. All so I could make more money than a job at another tech company? I'd rather work somewhere innocuous like Figma, Cloudflare, Notion, JetBrains, Linear, etc. Hell, if you only wanted to work for an AI company then at least go to Anthropic.

    • I don't know why the people here are naive enough to think that. Most programmers could donate more than 70% of their income to Africa if they wanted to make the world a better place, yet they only target people earning more than 3x what they do, even though the majority of the world earns less than 1/3rd of what they do.

    • Work is and has always been an economic bargain: your time for their money. Morality is a luxury that only the independently wealthy can afford. Any business that allows its employees to function according to their own morals becomes uncompetitive against its peers. That's why small companies run by individual founders who want to stay true to their mission often stay small. They inevitably get bought out by one of the larger ones.

      3 replies →

    • The problem with this argument is you can’t know or control what will happen in the future with something you built. This is the same moral dilemma the scientists faced after developing nuclear bombs.

      And the future is not deterministic (or if it is, it is highly chaotic) so the existence of a thing does not have a simple relationship with what will happen in the future. Scientists who developed convolutional neural nets could not know how much good or evil was caused by image recognition technologies. The same technologies that are used to detect tumors in images can be used to target people for assassination.

      There are exceptions, but my opinion is the supply chain of evil is paved with mundane inventions.

      4 replies →

  • Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".

    • I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.

      This is less noble than how Anthropic presents themselves but still much more attractive to many than xAI.

      6 replies →

  • I’ve heard the haha-but-serious joke numerous times that you can’t have a security department that’s not trans and furry friendly. Thing is, I completely believe that. Those groups are disproportionately represented among the security community, and I personally would not work somewhere that my friends in those groups would feel unwelcome. That’s a quite common sentiment even among us straight cis non-furry men.

    Well, I don’t think it’s a stretch that the kind of highly educated data scientists and engineers who have the experience to work in high-end AI labs also don’t want to work somewhere that their friends and associates would feel unwelcome, let alone have their friends question why they’d be willing to.

    Turns out opinions have consequences and freedom of speech goes hand in hand with freedom of association. People have the right to say whatever they wish. Others have the right not to want to work with them.

  • It’s interesting because for a long time people wanted to work for Elon because he held the moral high ground. “I’ll bring electric cars and space colonization online or die trying.”

    It’s sad to see the shift.

  • This is becoming the problem with all of his businesses - Tesla has a crazy valuation and it really seems like they're having huge trouble getting Robotaxi going in Austin given the very slow progress there.

    • Very few people down here want to ride in them, and I have multiple friends with hilariously disastrous stories.

      Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."

      1 reply →

  • Why does being a top AI researcher so often come with this philosophical bent you describe?

    • You are paying the smartest people in the world to think really really hard, and it turns out they might also think really really hard about not making the world a worse place.

      14 replies →

    • I would think it's because of the staggering money they're making. According to Fortune[0]:

      > Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

      > Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

      If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

      [0] https://archive.ph/lBIyY

      1 reply →

    • My experience with researchers (though not in AI) is that it's a bunch of very opinionated nerds who are mostly motivated by loving a subject. My experience is that most people who think really deeply and care about what they do also care more that their work is prosocial.

      3 replies →

    • Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.

    • This isn't unique to top AI researchers. Top talent has a long history of being averse to authoritarianism/despotism, at least in part because, by near definition, it must suppress truth. You can't build the future effectively with that approach.

    • Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

      There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.

      Note this doesn’t apply to everyone. Some people just want to make money.

    • Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - worldview might have made it clearer?

      1 reply →

    • Because they can afford it, they are very sought after.

      And smart people usually have moral convictions.

      I know for some people on this website it's hard to understand, but not everything in life is about $$$

      13 replies →

  • I can't say I know the AI research community well, but I'd imagine OpenAI's alignment w/ the military would not align w/ the personal philosophy of many.

  • What do you mean “philosophical”? Ethics and morals are not required, Elon can get whatever type of asshole he needs. Something else is up.

  • It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.

  • > But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work

    The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.

    • > The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.

      I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.

    • > The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.

      What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.

      2 replies →

Feel like the canary was when Grokipedia became a project.

Giant waste of time while Anthropic/OAI keep surging forward.

I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure, keeping up with real-time topics can be useful, but I am not sure how much of a product that is.

  • The Twitter social graph was an amazing data asset. I worked at a consumer insights firm and the data on followers/followings was quite powerful.

    Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.

    With that data, you could work out:

    - What celebrities/influencers to use in marketing campaigns
    - Where to advertise, and on which TV/radio channels
    - What potential brands to collaborate with to expand your customer base
    - What tone of voice to use in your advertising

    In some cases, we educated clients about who their actual customers were, better than they understood themselves.
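A toy sketch of the kind of over-indexing analysis described above; the follow graph, account names, and lift formula here are invented for illustration, not the firm's actual method:

```python
from collections import Counter

# Invented sample follow graph: user -> set of accounts they follow.
follows = {
    "user1": {"brand_x", "fitness_mag", "beach_model"},
    "user2": {"brand_x", "beach_model"},
    "user3": {"fitness_mag", "tech_news"},
    "user4": {"tech_news"},
}

def interest_lift(follows, anchor):
    """Score how much each account over-indexes among the anchor's
    followers relative to the whole population (lift > 1 = over-indexed)."""
    segment = [f for f in follows.values() if anchor in f]
    seg_counts = Counter(a for f in segment for a in f if a != anchor)
    all_counts = Counter(a for f in follows.values() for a in f if a != anchor)
    return {
        a: (seg_counts[a] / len(segment)) / (all_counts[a] / len(follows))
        for a in seg_counts
    }

lift = interest_lift(follows, "brand_x")
# beach_model over-indexes (lift 2.0) among brand_x followers; fitness_mag does not (1.0).
```

Ranking the taxonomy entities by lift is what turns a raw follower list into a portrait of who the audience actually is.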

    In one scenario, we built a social media feed showing what a group of customers following a well-known deodorant brand in the UK would see.

    When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”

    The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones who had been looking at their TV adverts of women on beaches chasing a man who happened to spray their deodorant on them. Their advertising from the past had been very effective.

    That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.

    I’m pretty sure he must be delighted with how things have panned out since.

    • That entire description sounds worthless to any positive direction of humanity. Therefore, probably rapaciously profitable.

      Very sad face.

    • That Zuckerberg quote was published in 2013 and supposedly was made a year or more before. Was it about when Dick Costolo was CEO (2010-2012)?

    • This reads very dystopian. You are not optimizing to understand people, you are optimizing to weaponize that understanding against them.

      When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.

      And this happens at scale, invisibly. People never see the manipulation.

      In any case, it is not useful for most people. It is useful for the people doing the deceiving.

      4 replies →

    • As an aside, that quote from MZ does bother me. There's more to running a web-scale, human-rights-respecting platform (because it has to be; it's the internet, and social media needs guidelines) than just making money (which Zuck doesn't seem to care much about anyway if he's sinking apparently billions into the metaverse while having no account support).

      Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey, but it felt like the spirit of the company at first was relatively neutral; it was a tool, it was what Jack came up with.

      Zuck replaced people's email addresses[1], and the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car gold mine". Or was it "dumb fucks"[2]?

      [1] https://news.ycombinator.com/item?id=4151433 [2] https://news.ycombinator.com/item?id=1692122

    • It _was_ a great asset. However, just as models need proper data, as soon as Musk removed the clamps on valuable social signals, well, he basically took a dump where he intended to eat.

      1 reply →

  • It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.

    • Many projects at his companies seem to be more and more Musk's vanity projects than ideas/products one can take seriously. This is also how Tesla ended up with a huge Cybertruck stock that nobody wants to buy, which thus had to be bought by his other companies. And it is becoming worse and worse, especially ever since he bought Twitter and sped up his tweeting rate.

      16 replies →

    • Probably the next generations of kids being fed PragerU study material will. Something tells me we haven't seen a fraction of what's going to happen in the decades to come.

    • I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for it - but the primary goal is not to convince humans; it is to influence search results of current models and to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musk's newest culture war ideas without ever being the wiser.

      It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.

      1 reply →

  • Twitter's communication style being based around brevity, slang, memes, spam and non-threaded conversations seems particularly unlikely to be helpful for optimising LLMs

    • >Twitter's communication style being based around brevity

      Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works, and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlamydia.

      3 replies →

    • > Twitter's communication style [...] seems particularly unlikely to be helpful for optimising LLMs

      This depends on what one wants to optimize the AI for. ;-)

  • Twitter has the mass adoption, and it takes effort to avoid bot bias and viewpoint bias - but as a valuable content source, it's a far cry from what it once was before Musk took it over.

  • > Feel like the canary was when Grokipedia became a project. Giant waste of time while Anthropic/OAI keep surging forward.

    Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`

    Agree re Twitter "good" != valuable.
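For what it's worth, the imagined pipeline really is a one-liner in any language. A minimal Python sketch, where `call_llm` and the article dict are placeholder stand-ins rather than a real model API or Wikipedia dump:

```python
# Hypothetical sketch of the "rewrite every article" loop described above.
def call_llm(system_prompt: str, article: str) -> str:
    # Stand-in for a model call; a real pipeline would hit an LLM endpoint.
    return f"{system_prompt}: {article}"

def rewrite_all(articles: dict[str, str], system_prompt: str) -> dict[str, str]:
    # for each article in Wikipedia { article = LLM(systemprompt, article) }
    return {title: call_llm(system_prompt, body) for title, body in articles.items()}

sample = {"Cat": "Cats are small mammals."}
result = rewrite_all(sample, "Rewrite from scratch")
```

The entire editorial character of the output would then live in that single system prompt, which is the crux of the criticism elsewhere in the thread.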

  • AFAIK Grok still doesn’t have a CLI coding agent that works with a subscription. That’s a shame. Grok Code Fast 1 was pretty impressive when it came out - for what it did, and they never followed it up with a new version.

    • You can use Cursor with Grok, though my experience is that Grok is the worst of the API providers Cursor supports.

  • > but I cannot imagine it's a valuable dataset.

    It's going to be a mixed bag, but any time there are world events, as far back as I can remember, Twitter (now X) was always first with breaking news. There's plenty of people and news orgs still on X because they need to be there for the audience.

  • Twitter as a data source is interesting. I think it gets overhyped because that's Elon's grift. But I can't deny that the real-time info aspect of it is pretty valuable. I definitely think it's not that much more valuable than the open internet from a context source perspective, though. Everything worthwhile on Twitter will end up elsewhere with a bit of lag. And the stuff that won't is noise anyway.

  • I'm not a fan of Elon's software endeavors, ever since he bought Twitter and turned it into an even worse cesspool of angry political nonsense than it used to be. I don't like how he's been biasing Grok, etc.

    But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.

    • I think the issue is simply this: Wikipedia trends towards unbiased info through use of the crowd. Grokipedia, with a single owner with an ax to grind, trends towards whatever Elon wants. It's poisoned information under the control of one man - cyberpunk novels have been written about less.

      5 replies →

    • >>I don't like how he's been biasing Grok, etc.

      >>But, what exactly is so bad about Grokipedia

    • It's controlled by a guy who spends all day retweeting white supremacists and lying about his companies. Why should anyone who isn't a white supremacist use it?

      1 reply →

“Orbital space centres and mass drivers on the Moon will be incredible.” - Musk

Right.

The product is the stock. TSLA[1] is up 3x in the last two years, despite no new models, the Cybertruck failure, the Robotaxi failure, the large-truck failure, and an overall decline in sales. How does he do it?

It's a concern seeing SpaceX, which builds good rockets, drawn into the X and xAI money drains. SpaceX is needed. If X and xAI tanked, nobody would care.

[1] https://www.cnbc.com/quotes/TSLA

  • Greatest hype man of all time, and it shows how whacked out reality and economics are.

  • If I were a SpaceX investor I'd be considering litigation. Saying the core product has to be rebuilt right after it gets bought by SpaceX?! Maybe the SpaceX investors would have liked some diligence about that before the purchase, but it looks like someone had a conflict of interest there.

    • SpaceX and xAI are both privately held.

      But this may mess up the proposed IPO.[1]

      By completing the SpaceX–xAI deal while both companies remain privately held, Musk can effectively set relative valuations, negotiate terms within a founder-controlled ecosystem, close, and then inform investors, without the procedural drag and disclosure obligations that attend a public-company merger. That flexibility can reduce near-term execution friction. It does not, however, eliminate fiduciary exposure; rather, it may defer scrutiny to the IPO phase, when investors and regulators will examine how and why the combination occurred, how it was priced, and how related-party dynamics were managed.

      [1] https://www.dandodiary.com/2026/03/articles/director-and-off...

  • You had the answer right there… SPCX will be the product; what they make will no longer matter.

I feel xAI is just a very big version of the Boring Co. "flamethrower": an unserious endeavor which is just a reskinned existing tool (it was a reskinned weed burner), but people were wowed by it anyway, since Musk was behind it, and they all pretended it was something new and notable.

The burning (heh) question is which SpaceX subsidiary will fail first, xAI or Tesla (not yet a subsidiary, but it's written in the stars (heh))?

Then again SpaceX is also jumping the shark what with their orbital data centers (remember those?).

Might be time to start a new Musk company soon.

I don't use it myself, but I feel like the way Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants. I think it's good that people tag @grok if they don't understand something or want an opinion, even if it looks pretty silly to see "@grok is this true" repeated multiple times in replies.

That said, Musk's attempts at misaligning the thing and making it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.

I also don't quite get how the business model is supposed to work out if its main usecase is to serve Twitter. I know they provide API access as all other models, but with how distrusted Musk is and how sensitive of a topic reliable model behavior is, they seem to sabotage themselves. Which company wants it to go mechahitler on them?

  • I disagree, I find that the grok replies are terrible product UX. Not only do they clog up the replies of every popular post, they're also constrained to extremely short answers with no sources. The community notes system, while also flawed in its own ways, is at least not nearly as disruptive and usually provides a link.

    Trying to make social media a source of truthful information is always an uphill battle and doubly so for X.

  • I’m really, really uninterested in reading AI content that other people have generated. If I’m on Twitter, I’m looking for what humans have to say.

  • Grok is a bot that:

    1) sometimes goes mechahitler

    2) was trained to be biased against empathy and understanding (because woke).

    3) is customized to spout Elon's opinions as fact.

    Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.

    • I guess I was mostly arguing that the integration of something like Grok into Twitter was definitely a net positive for online discussion, as anyone has a fact checker and explainer at hand now to defuse irrational online arguments.

      Also I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you see the outputs that people get from Grok, it seems mostly fine except in the specific scenarios that Musk seems to have focused their misalignment on.

      Of course something like Claude being integrated into Twitter would likely be better.

      1 reply →

    • You’re right. But it appears they may have failed with 2) and 3) because I frequently see Grok spit out content that doesn’t agree with the creators’ narrative.

    • > 1) sometimes goes mechahitler

      That "MechaHitler" episode lasted less than a day.

      > 2) was trained to be biased against empathy and understanding (because woke).

      No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.

      > 3) is customized to spout Elon's opinions as fact.

      Certainly a nugget of truth there.

      > Claiming it is "objective and rational" seems like a misjudgement to me.

      I do believe it's generally objective, simply because despite how much Elon tries to push it to the right, right-wingers summon Grok all the time to back up a bullshit story, and it dunks on them by debunking it instead.

Used Grok for the first time, in a Tesla, and for that purpose it actually made a lot of sense. It’s very well-integrated into the car’s systems and communication style while driving tends to be very tweet-esque. I think this is the niche they should lean into more (live assistant, e.g. Jarvis type stuff) and leave the more agentic niche to folks like Anthropic. Maybe even delegate more difficult or background tasks to those sorts of models. As a verbal interface I found it pretty pleasant.

  • I thought Grok in the car was awesome until it went off on a tangent and started praising Elon.

  • I am honestly a bit disappointed it couldn't do basic things, like play X on Spotify. To be fair, I accidentally activated Grok by holding the voice command button too long (which is another UX issue - i.e. 2 voice command interfaces).

  • Grok in Tesla is utterly terrible, a rushed-out product with very bad UX. As a simple example, it's the very first feature in Tesla's UI that does not come translated into the UI language set by the user; it's just available in English. That never happened before.

While I believe Grok was a decent model (in some of our internal use cases it performed the best until Gemini 2.5 Pro came out), I can't help but lament how the team chose to run things.

xAI (and Twitter) was the loudest about six-hour workdays, sleeping in the office, and always shipping. ~2 years later it feels like they have nothing to show for it. I'm sure the engineers at Google worked 4 days a week, 2 hours a day, with half of that being spent at the Google cafeteria and they dusted xAI years ago.

  • > I'm sure the engineers at Google worked 4 days a week, 2 hours a day

    Why are you sure of that? Anecdotally, everyone I know in and around Google DeepMind works incredibly hard.

    • No disrespect to the Google DeepMind team, but I meant it as a meme. I do not believe most Google employees work 2 hours a day.

      The Google DeepMind folks are incredibly smart - I just find it important to point out that the xAI guys spent a year assured they would beat Google because they slept in tents that they pitched in the office.

    • There’s a longstanding meme that Google is full of rest-and-vesters. Maybe it’s true in some departments, but I also have anecdotes that in GDM and other AI-related stuff, people are acutely aware of the existential threat of losing to OpenAI and have the appropriate amount of hustle.

      1 reply →

  • It's almost like burning people out is a bad idea. Fair enough if you're working 12 hour days as employee 1 at a startup but when your boss has more money than God and is working you like a dog you're not going to keep that up (especially when all of those people probably have much better opportunities available to them at the drop of a hat).

  • Anyone Google has hired in the last ~8 years was hired onto a team that is growing and has a culture of shipping and producing. Google regularly weeds out low performers, be it new grads or long timers who started doing the rest and vest thing.

    Now, I don't think most people at Google are literally driving to the office or sleeping there most of the time; you'll certainly have more WLB than at xAI.

    I'd even say, Google is much better at calibrating the right amount to push people than some other companies.

The irony is that while Wikipedia faces criticism for bias, it remains one of the few massive-scale sites with a clean internal link structure that doesn't feel manipulated by modern SEO 'clustering' tactics. For developers, their API is still a masterclass in how to serve structured data to the public.

These kind of HN submissions test how fair discussions can be here:

> Please don't use Hacker News for political or ideological battle. It tramples curiosity.

Reference: https://news.ycombinator.com/newsguidelines.html

  • > Please don't use Hacker News for political or ideological battle. It tramples curiosity.

    That ship has sailed a long time ago, with the approval of the moderation itself.

    • Yup, since around 2016 HN and other tech spaces got infested with people who cannot separate their political ideology from technical discussions.

      When it comes to FOSS they claim that FOSS has always been political to justify the politicization of everything they touch.

      Things used to be much better when the people adhered to the age-old wisdom "Keep politics and religion out of the office" and carried this attitude to neutral spaces online.

      In part, some of us got into tech because it was one of the places where meritocracy ruled and you could get away from those who thrive by overwhelming others with BS.

      I apologize for the rant.

      1 reply →

  • Is it politics or ideology to recognize the flawed character of someone? How cultish his following is? His erratic behavior, the damage that he's doing?

    Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.

    • Yeah and it’s not our fault every Elon discussion involves politics. It’s literally all he does all day, and all he seems interested in, anymore.

  • Elon is literally a political figure. How is one supposed to discuss his actions without invoking his politics?

  • They trample science; the paradox of tolerance in action.

    Who fights can lose, who doesn't fight has already lost.

  • So, it utterly fails? A good part of the community still seems to be stuck in 2017 where Elon could do no wrong.

    Turns out a lot of not just wrongdoing but outright malice could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but them's the breaks here.

xAI's biggest contribution to the space seems to have been their x-rated image/video model. Hard to see what xAI has to offer against Gemini, Claude, ChatGPT.

  • I'll bite. I think their conversation (voice) model is more fluid than competitors'. It's also very good at hitting up Twitter for realtime information, and was that way before the current tool-use models got fully up and running. Anecdotally, I think it had better theory of mind than its era (Gemini 2.5): I found it a useful issue spotter for negotiations and planning in a way that OpenAI and Claude were not near its launch date. It led the Vending-Bench leaderboard for some time after launch.

    Taken together, I infer that RL training toward a slightly less homogenous cultural standard than the other frontier AI labs adds some capabilities, or can at times.

    It's quite long in the tooth right now, though. But I'll definitely talk to the next version; I like heterogeneity in the model space, and Grok is very different than the other big three.

  • To be fair I think there's a good usecase there. Someone's gonna do it. People will want it.

    American financial institutions are too prudish for it but money is money. And personally I think there's nothing morally wrong with it (of course within normal restrictions like 18+, consent of portrayed parties etc)

    xAI is getting flak in Europe because they don't obey consent and age, not because it's porn.

    Personally I prefer porn made by real people right now, not just because of quality but because they have character. But I can imagine experiences becoming more interactive that way and that would be nice.

> "AI was not built right first time around, so is being rebuilt from the foundations up"

So Tesla's recent $2 billion investment in xAI was a bad deal?

It looks a lot like a public company is being used to bail out a private one.

  • I'm pretty sure that all these acquisitions have been glorified accounting tricks to undo the damage Musk did when he bought Twitter at an obscenely overvalued price in 2022. Clearly he didn't actually want Twitter at that price, because he tried to back out almost immediately after making the offer, so now he has his accountants do all this money-shifting to effectively "sanitize" his purchase and recover his funds.

> Recruiters have been contacting unsuccessful candidates from previous interviews and assessments to offer them jobs, often on better financial terms, the people said.

I'm not sure those candidates would want to work for xAI after seeing the news and everything unless they desperately need a job right now.

It's not hard to imagine getting laid off or fired weeks if not days after joining the company.

> Toby Pohlen, a former DeepMind researcher, was put in charge of the “Macrohard” project to build digital agents that Musk said could replicate entire software companies. Musk said it was the “most important” drive at the company. The name is a “funny” reference to Microsoft, the billionaire added. Pohlen left 16 days later.

When I was 9 years old, my uncle asked me what I was going to do for work when I got older. I told him I was going to start a company called "MacroHard", and become the richest man alive. He told me that's not how the world works. Turns out it is.

This is veiled speak for "No one wants to work for us, so we need to contact rejected applicants to fill positions".

I use AI for work, but not agentically; at most per method/function using GitHub Copilot (which has Grok on it).

Grok is at best useful for commenting code.

Maybe they shouldn't have spent so much time trying to make their model have an edgy cringe attitude, Idk.

Obviously catching up to others in agent assisted coding is the motivation for this. But it is also an odd decision in the same way that Meta hiring an AI leader from a data labeling company is odd.

It feels like xAI is perpetually playing catch-up.

They haven't quite committed enough to a novel direction relative to Anthropic or OAI; what's described in the OP seems symptomatic of a lack of differentiation.

If you spend all your time judging yourself relative to the incumbents, there will be no time left over to innovate.

The leash is too tight!

I've been saying this for a while, but if I had to use Grok for anything programming-related I'd feel very sad and unproductive. I was playing around with a local TTS model codebase but having some issues getting it to work, so I tried explaining the problem to all the major models to see how they performed. Grok performed the worst by a significant margin, and the worst part was that it easily became stuck trying minor changes that didn't solve the key problem.

If we are to take any claims of Recursive Self Improvement seriously at all, then having a competent coding model seems like a key asset where you need to guarantee that you're remaining competitive. Why wouldn't you make coding models a top priority if you expect it to ultimately help your internal teams become more productive and effective?

There's also not an unlimited supply of researchers and engineers for them to keep burning through people at the rate at which they've been working. Although I guess for people with short timelines it makes sense to sprint hard, while people with longer timelines are more likely to treat this as a marathon. Maybe the years of burning bridges and developing such a toxic reputation are finally catching up to Elon.

I think part of the harm that Elon has done is framing all the work at xAI as engineering while being highly dismissive of research, but a lot of research requires running experiments or thinking about problems and exploring them for long periods of time. If you're just grinding out work nonstop you don't really have time to let your mind wander and explore new ideas.

Honestly, I'm surprised they've done such a terrible job with programming. I remember around summer last year it was quite apparent how far behind they were with coding tools, but Elon was posting about taking that domain a bit more seriously. Why didn't any of those efforts materialize into real outputs? Something must be truly dysfunctional inside of xAI for them not to be shipping anything at all, especially considering Elon's propensity to ship undercooked products while continuing to iterate on them, as he has done in many previous cases.

I've noticed that Elon has also gone very hard on social media, posting a ton of criticisms of the other big AI company CEOs like Dario Amodei. This suggests to me that he must feel very threatened, otherwise he wouldn't be resorting to such childish behavior. He must feel incredibly frustrated that no amount of money is able to make him more competitive within the AI space.

Their goal of moving compute to space combined with their capacity to launch tons of payload will make this look like a tiny blip.

  • What is the benefit of "moving compute to space"?

    • It's hard for an uprising of poor people to shut it off. It's the ideal place to run your CEO / President simulations.

      I say this tongue in cheek, but in all seriousness, I can't really think of any other benefit, and I no longer have a lot of faith in the good sense of some of the people involved.

      11 replies →

    • > What is the benefit of "moving compute to space"?

      I’ll bite. It’s cheaper and quicker to permit a launch than permit, zone and interconnect a datacenter. And solar panels in space don’t need glass cladding, which makes them cheaper to make and lift.

      The downside is launch cost. But there is a breakeven between these factors that seems to have most of its error bars within Starship’s target. (By my math, around $35/kg.) So if Starship works, and all indications seem to show that it will, eventually, then that puts space-based data centers at cost parity with terrestrial ones within a decade. Which was, well, unexpected when I ran the numbers.

      (The surprising finding when you run the numbers is launching the chips and solar panels isn’t the limiter, it’s launching the radiators. Which opens up whole new questions about at what scale it makes sense to stop sending those up the well.)
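      The break-even framing above can be sketched in a few lines. Every number below is a made-up placeholder for illustration (the parent's actual inputs aren't given), so only the structure of the comparison is meaningful, not the output:

```python
# Rough break-even sketch: at what launch price ($/kg) does a space data
# center cost the same as a terrestrial one? All figures are illustrative
# assumptions, not sourced numbers.

def breakeven_launch_price(
    terrestrial_cost_per_kw=14_000.0,    # assumed: lifetime capex + power, $/kW of IT load
    space_hardware_cost_per_kw=6_000.0,  # assumed: panels, radiators, rad-tolerant hardware, $/kW
    launched_mass_per_kw=50.0,           # assumed: kg lifted per kW (chips + panels + radiators)
):
    """Launch price ($/kg) at which the two options cost the same."""
    # Whatever a terrestrial deployment costs beyond the space-specific
    # hardware is the budget available for launch.
    launch_budget = terrestrial_cost_per_kw - space_hardware_cost_per_kw
    return launch_budget / launched_mass_per_kw

print(f"break-even launch price: ${breakeven_launch_price():.0f}/kg")
```

      Note how sensitive the result is to launched mass per kW, which is consistent with the parent's point that radiator mass, not chips or panels, ends up being the limiter.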

      3 replies →

I think it would have been better to have just brought Ashok Elluswamy over and placed him in charge of a group and then tried to just keep the researchers on rather than firing them. It is hard to get anything done if you do not have the talent already onboard.

lol! no surer sign of a junior/naive/ignorant developer or manager than the sentiment "okay, well, let's start from scratch and do it right this time."

big projects generate cruft. there are ways to minimize it, but as you go along there will always be some stuff that doesn't quite mesh with whatever else you've got going on. if you insist on ironing out every single wrinkle (admirable!) you'll never actually deliver a result.

I'm not saying this will fail. green field projects can certainly be a godsend when they produce something better than what they attempt to replace. but they are always a sign of failure. of not being able to work your way out of the mess you made with the first attempt. so that just begs the question: what are you going to do when this attempt gets hard to work with? going to give up and start over again - do it right that time? or...?

[flagged]

  • Many wouldn't, but some people share his values, and given the compensation, it makes saying "no" much harder. Money may not be the most important thing in life, but it does make life a lot easier.

  • Same, I earn 60K as a senior, but I would never accept a 200K+ position at xAI.

    • As a US citizen: you'd have to pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.

      1 reply →

  • We had cabinet members of this administration call Trump a Nazi months prior to his nomination. People give up all kinds of morals for financial gain. That was always true, but it's become outright blatant this past decade.

  • You wouldn't want to work for a genius? Probably the most significant person alive today?

    • Get down to A&E quick; you've clearly drunk a potentially fatal amount of Elon Kool-Aid. Musk is a buffoon. Clever? Yes, by all accounts. Genius? Hardly. He's had luck and made good judgments that mostly offset the bad ones. Most of all, he has enough money to power through errors that would bankrupt thee and me.

    • Evidently not genius enough to not have his car business and global image fail. Genius he might be, but he's only entrenching his position in a way not dissimilar to cults: by alienating a lot of people you can get loyalty from a selected few. If that's the kind of power he wants, sure, he's a genius. But a good businessman is something else.

    • Let's assume that you are correct. How is that relevant to how good he is as an employer? There are lots of people in history who were very significant and perhaps geniuses in some way that I wouldn't want to work for in a billion years.

Musk sounds like such a nightmare to work for. I legitimately don't understand why anyone would put up with him. What's the appeal?

  • He has followers -- takes all kinds, eh?

    That said, I'm going to guess that some feel like it's the best choice they have -- the devil they know.

  • (Shrug) He built some awesome companies that did some awesome things. That inspired people, especially at a time when most job opportunities in tech seemed to revolve around selling ads.

    Then he went off the deep end, seemingly around the time when the guy in Thailand insulted his submarine idea. It became clear that he can control trillion-dollar companies but not himself. And, well, life's too short to spend it working for Nazis, nutcases, or both.

It does not surprise me. The free Grok has gotten worse since 4.0; they increasingly save money by not responding at all or by allowing only one answer. Grok now defends the administration and billionaires.

The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.

All AIs are toys and the coding promises are just a lie to string along investors. Unfortunately many of these are senile Star Trek watchers who buy into everything.

Wait, what does this imply for Cursor? I DGAF about xAI and will never use their Grok, but I did like Cursor more than the alternatives (even if I'm just running opus 4.6 most of the time).

But now he is poaching the two heads of engineering of a company that's trying to move very quickly, how is that going to affect their speed and success?

I'm not surprised; Grok definitely falls behind as both a coding agent and a research tool.

Claude codes the best, GPT is the best research tool, and Grok is really only great at videos. Which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding.

  • > grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding

    With the right product leadership, this could actually be a killer app usecase for the entertainment industry as well as human-AI user interface - most people find text and typing to be a counterintuitive user experience (especially those whose day job isn't directly touching code or Excel).

    Additionally, CodeGen as a segment is significantly oversaturated at this point, and in a lot of cases an organization has the ability to armtwist a 4th party data retention guarantee from Anthropic or OpenAI to train their own CodeGen tools (ik one F50 that is not traditionally viewed as a tech company going this route).

    That said, Musk has a reputation of internally overriding experienced product leaders with a track record.

    It's a shame because Grok and xAI had potential, and it wouldn't hurt to have another semi-competitive foundation model player in the US from a redundancy and ecosystem perspective.

It's surprising that AI coding agents have network effects but it's true. Think about it from first principles & you'll realize that the bottleneck is how many people are using it to write real code & providing both implicit (compiler errors, test failures, crash logs, etc) & direct ("did not properly follow instructions", "deleted main databases", "didn't properly use a tool", etc) feedback. No one is using xAI for serious software engineering so that leaves OpenAI, Anthropic, & Google w/ enough scale to benefit from network effects. No one has real AI but what they do have is the appearance of intelligence from crowdsourced feedback & filtering. This means companies that are already in the lead will continue to stay there & xAI started way too late so they will continue to lose in every domain that actually matters & benefits from network effects.

  • Is there really a network effect, though? What’s the moat?

    • If you are using an AI w/ 100 users who are writing throwaway software vs someone who is using AI w/ 1000 users who are writing software w/ formal specifications then guess which AI is going to win? The answer is plainly obvious to me but might not be to those who haven't thought about how current AIs actually work.

@grok is this real?

@grok fire the bottom 50% engineers from x.ai ranked by number of commits per day

@grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine

I honestly don't know what to expect from Elon these days. But it's rarely good news.

The takeover by SpaceX was obviously a bailout. And now they're pressuring NASDAQ to change the rules so they can dump their junk into the index funds.

Wow, a bit weird that Musk, who must have known how badly xAI was doing, spent so much of his investors' money buying out xAI.

What an enormous blunder.

  • It's how he hides losses though. People who aren't Musk can demand answers to questions he'd like to ignore.

    As it is within the Musk empire, xAI is used to hold up X, Tesla is holding up xAI. And all of that debt is being slowly shuffled to SpaceX.

    • SX investor here: the combined value of SX is well up on the private secondary market post-acquisition. It was value accretive, in very real dollar terms.

      • Even if Starlink had more than a few tens of millions of customers: China Mobile has 900 million subs and is worth around $250 billion. ULA was recently valued at about $1 billion. SpaceX might possibly be worth 50 or even 100 times as much. Falcon 9 is the world's workhorse rocket, but it's just not that remarkable, and Starship is utterly unproven at launching to orbit and landing both stages. Starship has a payload capacity problem that must be solved even to get to the point where launching 15 refueling missions would be enough to get a Starship anywhere beyond Earth orbit.

        It looks like the plan is to IPO with a small float (in relative terms) and get all of the retail-investor Elon fans to line up for the rug pull.

      1 reply →

I feel like even just a couple years ago it would have been shocking to see an article involving Musk have this kind of spin. Like you'd never see a line like this:

> The name is a “funny” reference to Microsoft, the billionaire added.

in something from 2023 or earlier.

Not even Elon believes that Cursor is worth $50B or even $29B.

  • If key employees are leaving Cursor to join xAI, I would imagine not even Cursor employees are optimistic about the company’s future valuation.

  • How can cursor be worth more than a few billion? Claude/Codex are already better autonomous SWE-lite replacements. Cognition surely has a better internal harness. Cursor does have a lot of users, I'll give it that.

    • I like Cursor a lot more than Claude Code. It works better for me overall. I like the way they integrate it into the IDE so the agent is my tool rather than a 'partner' or something like that. I'm pretty sad that they lost some engineers, I hope these folks weren't integral to Cursor in any way.

    • Distribution is also important. Cursor is a great normie tool (I’m one of them), with probably more enterprise deals than the competition.

    • Moats are weird right now… but Cursor doesn’t have one at all so I agree it can’t really be worth much.

xAI showed me that it’s really still OAI and Anthropic (which is basically the OG devs). No matter how much money you throw at the problem, the entire space is still in the hands of a few.

dang wrote:

> You may not owe you-know-whom better, but you owe this community better if you're participating in it.

This is like telling a country that’s being invaded that they can only respond with strongly worded letters when their enemy is dropping tactical nukes on them.

But hey, Paul Graham and cronies benefit from the status quo as much as any other billionaire, so let’s not rock the boat, right?

The word “complicit” comes to mind.

[stub for generic-indignant tangents - not what this site is for - please see https://news.ycombinator.com/newsguidelines.html]

He is re-building a company that he himself built less than 3 years ago?

Unfortunate. The Grok team built a phenomenal model. I use it all the time and it very often outperforms GPT and Claude on coding and STEM research related tasks. I was part of the Grok 4.2 beta with multi-agents for a while and it was just amazingly good.

People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.

  • > People aren't using it for reasons other than its capabilities.

    This is very true. I have no idea how it performs, as I wouldn't use it even if I were paid to. Wouldn't matter if it was the best model available; in my view the name is so thoroughly tainted by now that you would take a reputational hit just by admitting to using it.

  • > People aren't using it for reasons other than its capabilities.

    This is a fact of life, though. "Who created it" is a valid and common reason to rule out using a particular product, even one with objectively good quality.

  • Have you tried the 5.3 Codex Xhigh, 5.4 Xhigh, Opus 4.6, Gemini 3.1?

    All of them (even Gemini, the worst of the bunch) far outclass Grok on everything I've thrown at them, especially coding.

    Grok is good at summarizing what's happening on twitter though.

  • My experience was quite different. It was on par with open-source models from China (and priced about the same) and could never replace Sonnet/Opus/GPT-5.x.

I do use Grok as a chatbot sometimes. Very good for sourcing X and general web search. Not as "prude" as the others too.

  • Prude? I've played with all the main AI players for the last 2-ish years.

    I've never once thought: you know what? that was a bit prudish.

    Genuinely morbidly curious. What use case do you have where you end up making that conclusion?

    • An earlier version of Sonnet (not sure which one; ~1 yr ago) refused to give me instructions on taking the life of another when I asked something like - "how do I kill a running process by name?"

    • Making funny memes of my friends mainly. ChatGPT won’t touch that, I haven’t tried with Claude yet, but grok keeps the group chat flush with laughing emojis.

      That’s all I use it for really- things out of alignment with the other platforms- which IMO are better on every other metric (except having a sense of humour of course)

      2 replies →

Yes, 11 hours up and everyone piles on with free insults against a model with top adoption. Being aligned with your personal views is not "ahead of the curve"; it's just personal.

The grok button on twitter is pretty awesome. Instantly summarize / explain any tweet, even memes, including replies. Ask follow up questions. Not sure many people know it's there.

Also grok in the Tesla is fun, get answers to questions without looking at a phone. I once had it search up a blog post and read it out to me while driving. The NSFW mode is pretty...disgusting so I leave that off.

I hope they find a way with Optimus or something. FSD is incredible. More competition is a good thing.

It’s so funny to me how much ire Elon draws from the HN crowd. HN in general is a very negative place, but it’s amplified to truly remarkable levels of hysteria when discussing Elon.

Amongst my cadre of mostly founder friends, Elon is deeply admired. If you have ever tried building something new truly by yourself then you know it is capital-H Hard: getting your teeth kicked in by investors and customers and this bizarre breed of self-righteous people who gain purpose from poking you with a stick.

It’s never clear if a new venture will succeed, but the glee I see here for a stumble is pretty disappointing.

  • So you deeply admire a man who threw a temper tantrum when his giant box, designed by a bunch of people with no experience in anything underwater or rescue, much less underwater rescue, was deemed unusable for rescuing people from an underwater cave with passages so small that divers had to remove their gear and push it ahead of them? And who repeatedly, directly, called the lead rescuer a pedophile?

    You deeply admire a man so unable to restrain his ego and temper that much of his production team at Tesla quit, some right to his face, because they couldn't meet his nearly impossible goal of extreme levels of automation on the Model 3 production line? Which, if all else is ignored, cost Tesla billions in delays because of his demands?

    You deeply admire a man who is vehemently racist and misogynistic?

    You deeply admire a man who latches onto just about any conspiracy theory?

    You deeply admire a man who is so desperate for attention he unblocks himself from Twitter users' accounts?

    You deeply admire a man whose companies were under investigation by nearly every federal enforcement agency there is?

    You deeply admire a man who has failed to meet the vast majority of his own publicly stated benchmarks?

    And who engages in PT Barnum levels of bullshit, like having "AI robots" that are actually just robots piloted by unemployed actors?

    The man is a pathological liar who has failed upward not because of some sort of unique talent or skill, but because he's extremely abusive and willing to break any regulation or law he sees as inconvenient.