Mark Zuckerberg freezes AI hiring amid bubble fears

2 days ago (telegraph.co.uk)

These changes in direction (spending billions, freezing hiring) over just a few months show that these people are as clueless about what's going to happen with AI as everyone else. They just have the billions and therefore dictate where the money goes, but that's it.

  • This is a structural problem with our economy, much larger than just Facebook. Due to its large-scale concentration, the allocation of capital in the economy as a whole has become far less efficient over the last 20 years.

    • But always remember, it's not technically a monopoly!

      Boy am I tired of that one. We desperately need more smaller companies and actual competition but nobody seems to even be trying

      9 replies →

  • This is why I ignore anything that CEOs say about AI in the news. Examples: AGI in a few years, most jobs will be obsolete, etc.

    • CEOs were never credible to begin with, but what about Nobel laureate in Physics, Geoffrey Hinton, telling us to stop training radiologists? Nothing makes sense anymore.

      3 replies →

  • The media always says AI is the biggest technological change of our lifetime... I think it was the internet, actually.

    • The media was saying NFTs are a reasonable investment and web3 is the future, so I am not sure if they have any remaining credibility.

      We are at the awesome moment in history when the AI bubble is popping, so I am looking forward to a lot of journalists eating their words (not that anybody is keeping track, but they are wrong most of the time), a lot of LLM companies going under, and a domino crash in the stocks of everyone from Meta and OpenAI to AWS, Google and Microsoft to SoftBank (the same guys who gave money to Adam Neumann of WeWork).

      11 replies →

  • That's one interpretation, but nobody really knows. It's also possible that they got a bunch of big egos in a room and decided they didn't need any more until they figured out how to organize things.

  • You think folks who have experience managing this much money/resources (unlike yourself) are clueless? More likely it's 4D chess.

    • I think the use of "4d chess" was your downfall.

      I do however think that this is a business choice that at the very least was likely extensively discussed.

    • Yes, yes I do. How much practical experience does someone with billions of dollars have with the average person, the average employee, the average job, and the kind of skills and desires that normal people possess? How much does today's society and culture and technology resemble society even just 15 years ago? Being a billionaire allows them to put themselves into their own social and cultural bubble surrounded by sycophants.

META has made only $78.7 billion in operating income over the trailing 12 months. Time to buckle up!

https://finance.yahoo.com/quote/META/financials/

Zuckerberg either doesn't have the resolve for changing the business, or just keeps picking the wrong directions (depending on your biases).

First Facebook tried to pivot into mobile, pushed really hard for a short time and then flopped. Then Facebook tried really hard, and for a while, to make the Metaverse a thing, but eventually Meta stopped finding it interesting and significantly reduced investment. Then AI was the big thing and Meta put a huge amount of money into it, chasing after other companies with an arguably novel approach compared to the rest of big tech... but now it seems to be backing out, or at least messaging less commitment. Oh, and I think there was some crypto in there too at one point?

I'm not saying that they should have stuck with any of these. The business may not have worked in each case, and that's fine, but spending billions on each one seems like a bad idea. Zuckerberg is great at chasing the next big thing, but seemingly bad at landing the next big thing. He either needs to chase them more tentatively, investing far less, or he needs to stick with them long enough to work out all the issues and build the growth over the long term.

  • For the past 15 years, mobile has been the main revenue source for Facebook. As big as Facebook is, they're at the mercy of two competitors: Apple and Google. Apple has been very hostile to Facebook, because Facebook makes a shitload of money off Apple's platform and refused to pay a certain percentage to Apple - unlike Google, which pays $20B a year to access iOS users. Apple tried to cut Facebook off with ATT on iOS 14, but it didn't work.

    Because of this, Zuckerberg has to be incredibly paranoid about controlling his company's destiny, to stop relying on others' platforms to deliver ads. It would be catastrophic for Facebook not to be a main player on the next computing platform, and they're currently making a lot of money from their other businesses. Zuckerberg is ruthless and paranoid, he has total control of Facebook, and he will use all of its resources to control the next big thing. I think it comes down to this: Zuckerberg believes it's cheaper to be wrong than to miss out on the next platform, and Facebook can afford to be wrong (to a certain extent).

    • > For the past 15 years, mobile has been the main revenue source for Facebook. As big as Facebook is, they're at the mercy of the 2 competitors

      Before mobile was this big, Facebook tried their own platform and bottled it. This was during the period that the market was still diverse, with Windows phones, Blackberries, etc.

      They also tried to make mobile web a thing for a few years past when it was obvious that native apps were the way forward.

      2 replies →

    • Meta Quest Store charges the same cut as Apple with stricter control in many ways.

  • Don't forget gaming back in the day! Facebook games started taking off, then Facebook decided that the _only_ way you could get paid on the Facebook platform was with Facebook Credits, and to incentivize Facebook as the gaming platform of choice, Facebook would give out free Credits to players to spend on Facebook games. Of course, if your game was the one they chose to spend those Credits on, you wouldn't actually get paid, not with promotional credits, what, are you crazy?

    No, I'm not still bitter from that era, why do you ask?

  • Cory Doctorow has a compelling theory that the megatech companies have to appear to be startups, or else their share price reverts to normal multiples. Hence the continuous string of increasingly over-hyped "game-changing technologies" they all (not just Meta) keep rolling out.

    VR, blockchain and LLMs have their value, but it's a tiny fraction of the insane amounts of money being pumped into these bubbles. There will be tears before bedtime.

    • Indeed, for big valley tech companies it's crucial to have a new business developing in the wings which has plausible potential to be the "next big thing." They're desperate to keep their stock price from being evaluated solely on trailing twelve month revenue, so having a shiny, ephemeral hype-magnet to attract inflated growth expectations is essential.

      So far, it appears the psychology of investors allows the new thing to fail to deliver big revenue and be tacitly dropped - as long as there's a new new thing to replace it as the aspirational vehicle. Like any good mark in a con game, tech investors want to believe.

      5 replies →

    • > Cory Doctorow has a compelling theory that the megatech companies have to appear to be startups, or else their share price reverts to normal multiples.

      Meta's P/E is about the same as S&P 500.

    • This may well be true, but my point is more that Facebook/Meta/Zuckerberg seem almost uniquely unable to turn the startups into great new businesses, when compared with the other big tech companies.

      Amazon added cloud and Prime; Microsoft added cloud, Xbox, and 365; Google added Chrome, Android, cloud, YouTube, consumer subscriptions, Workspace, etc.; Netflix added streaming and their own content; Apple added mobile, wearables, and subscriptions.

      Meta though, they've got an abandoned phone platform from years ago, a half-baked Metaverse that is being defunded, a small hardware business for the Quest, a pro VR headset that got defunded, a crypto business that got deprioritised, and an LLM that's expensive relative to open competitors and underperforms relative to closed competitors... which the tide appears to be turning on as the AI bubble reaches popping point.

      2 replies →

  • Maybe he can work on making Facebook not be such a piece of shit. I feel like he got his one lucky break and should just give up on trying to make more money. He already has billions. Is he proud of Facebook as a product? Because as a user it feels sluggish, buggy, inconsistent, and just full of low quality trash. I would be embarrassed if I was him.

  • Metaverse was a flop maybe, but Meta makes something like $1 billion a week from its mobile apps; it'd be crazy to say that is not successful.

    The fact that it was so successful, and that Zuck picked mobile to be the next big thing before many of his peers and against what managers in the company wanted to do, is probably what has made him overconfident now that he can do it again.

  • >Then Facebook tried really hard to make the Metaverse a thing, and for a while, but eventually Meta stopped finding it interesting and significantly reduced investment.

    That's a charitable description of a massive bonfire of cash and credibility for an end product that looks worse than a 1990s MMORPG and has fewer active users than a small town sports arena.

    • Compared to other recent bubbles (crypto, NFTs, and AI), it's practically quaint and lovable by comparison. About the only people it hurt are Mark Zuckerberg and the marketing grifters who tried to start companies around it.

  • > Then Facebook tried really hard to make the Metaverse a thing...

    An unforced error on the scale of HBO switching to MAX, except likely far more expensive. What is the Metaverse anyway?

    • It's the future!

      The same as Zuck's bet on VR (remember Oculus?).

      Similar to Zuck's promises of superintelligence.

      Just one of the many futures wherein Meta poured a lot of money and achieved nothing.

      I hope in their real future there is bankruptcy and ruin.

      2 replies →

  • It's important to analyze decisions within the context at the time, not the modern context.

    When Facebook went into gaming, it was about the time they went public and they were in search of revenue. At the time, FB games were huge. It was the era of Farmville. Some thought that FB and Zynga would be the new Intel and Microsoft. This was also long before mobile gaming was really big, so gaming wasn't an unreasonable bet.

    What really killed FB Gaming was not having a mobile platform. They tried. But they failed. We could live in a very different world if FB had partnered with Google (who had Android), but both saw each other as an existential threat.

    After this, Zuckerberg paid $1 billion for Instagram. This was a 100x decision, much like Google buying Youtube.

    But in the last 5-10 years the company has seemed directionless. FB itself has fallen out of favor. Tiktok came out of nowhere and has really eaten FB's lunch.

    The Metaverse was the biggest L. Tens of billions of dollars got thrown at this before any product-market fit was found. VR has always been a solution looking for a problem. Companies have focused on how it can benefit them, but consumers just don't want headsets strapped to their heads. It's never grown beyond a niche and never shown signs that it would.

    This was so disastrous that the company lost like 60%+ of its value and seemingly it's been abandoned now.

    Meta also dabbled with cryptocurrencies and NFTs. Also abandoned.

    Social media really seems to have settled into a means of following public figures. Individuals generally seem to interact with each other via group texts.

    Meta has a massive corpus of posts, comments, interactions, etc to train AI. But what does Meta do with AI? Can they build a moat? It's never been clear to me what the end goal is.

    • > Meta has a massive corpus of posts, comments, interactions, etc to train AI

      I question whether the corpus is of particularly high quality and therefore valuable source data to train on.

      On the one hand: 20+ years of posts. In hundreds of languages (very useful to counteract the extreme English-centricity of most AI today).

      On the other hand: 15+ years of those posts are clustered on a tiny number of topics, like politics and selling marketplace items. Not very useful unless you are building RagebaitAI I suppose. Reddit's data would seem to be far more valuable on that basis.

    • > Social media really seems to have settled into a means of following public figures. Individuals generally seem to interact with each other via group texts.

      I wish Google circles were still a thing.

  • He, like many other billionaires, confused luck with skill. Just because they were in the right place at the right time to launch something doesn't mean their other ideas are solid or make sense.

    • Wouldn't it have been? Without the sexually exploitative angle, the initial idea would have just been a closed Myspace clone with not much of a path to success.

      He never tried his secret sauce again. He never realized where his actual success came from.

  • Oh, I'm sure one day he'll chase the next big thing, but like the proverbial dog who chases the car, what will he do once he catches it?

It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.

A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.

I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.

  • I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

    We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.

    This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless; there's a lot you can do with AI already. But a lot of use cases that are obvious, and not only in retrospect, will only be possible once it matures.

    • Some people even figured it out in the 80's. Sears (together with IBM) founded and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.

      Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.

      49 replies →

    • > We're clearly seeing what AI will eventually be able to do

      Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

      Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.

      For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". It's a systemic issue. No amount of data will solve it because LLMs will -never- be sentient. (See also the tokenization sketch below.)

      Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.

      18 replies →
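
      A side note on the letter-counting examples above: whatever the role of training data, one structural factor that gets cited a lot is that LLMs operate on subword tokens rather than characters, so a word's spelling is never directly visible to the model. Below is a minimal sketch of that, assuming the tiktoken package is installed; the cl100k_base encoding is just an illustrative choice, not a claim about any particular model.

          # Minimal sketch: show the subword tokens a model would actually "see".
          # Assumes `pip install tiktoken`; the encoding choice is illustrative.
          import tiktoken

          enc = tiktoken.get_encoding("cl100k_base")

          for word in ("strawberry", "blueberry"):
              token_ids = enc.encode(word)
              pieces = [enc.decode([t]) for t in token_ids]
              # Counting letters is trivial in Python, but the model only gets the
              # token IDs, so any letter count has to be inferred, not read off.
              print(word, "->", pieces, "| letters:", len(word))

      None of this settles whether more data fixes it; it just shows why character-level questions are an awkward fit for token-level models.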

    • > The groundwork has been laid, and it's not too hard to see the shape of things to come.

      The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.

      3 replies →

    • As someone who was a customer of Netflix from the dialup to broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s where the tech for widespread broadband was just fundamentally not available.

      4 replies →

    • >I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

      I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.

      33 replies →

    • Is some potential AGI breakthrough in the future going to come from LLMs, or will they plateau in terms of capabilities?

      It's hard for me to imagine Skynet growing from ChatGPT.

      1 reply →

    • I'm starting to agree with this viewpoint. As the technology seems to solidify to roughly what we can do now, the aspirations are going to have to get cut back until there's a couple more breakthroughs.

      1 reply →

    • I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.

      What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.

    • > We're clearly seeing what AI will eventually be able to do

      I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.

      Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.

  • > A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure)

    If you had actually invested in AI pure plays and Nvidia, the shovel seller, a couple of years ago and sold today, you would have made a pretty penny.

    The hard thing with potential bubbles is not entirely avoiding them, it’s being there early enough and not being left at the end holding the bag.

    • Financial advisors usually work on holistic plans, not short-term ones. It isn't about timing markets; it's about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.

      1 reply →

    • Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.

      4 replies →

  • It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).

    • Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be 1 founder of facebook.

      17 replies →

    • When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.

    • Giving a $1.5 million salary is nothing for these people.

      It shouldn’t be mind boggling. They see revolutionary technology that has potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success.

      You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.

      Obviously Mark is where he is also because of luck. But he's not an idiot, and clearly it's not all luck.

      4 replies →

    • It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.

      1 reply →

    • I'll differ from the siblingposters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about.

      Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.

      Gee, what makes it grow so big though? The power of human ambition?

      And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.

      To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept because it corresponds to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind, considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person", but he's not the one with the impostor syndrome.

      For ours are human minds, optimized to view things in person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked, my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, to perform an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.

      See also: "Beyond Power / Knowledge", Graeber 2006.

      17 replies →

    • The answer is fairly straightforward. It's fraud, and lots of it.

      An honest businessman wouldn't put his company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.

      An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.

      An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".

      Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely.

      The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.

      6 replies →

  • > record-setting bonuses they were doling out to hire the top minds in AI

    That was soooo 2 weeks ago.

  • > It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted

    Or the more likely explanation is that they feel they've completed the hiring necessary to figure out what's next.

  • > …lot of jobs will disappear.

    So it’s true that AI will kill jobs, but not in the way they’ve imagined?!

  • > A couple of years ago, I asked a financial investment person about AI as a trick question.

    Why do you assume these people know any better than the average Joe on the street?

    Study after study demonstrates they can't even keep up with market benchmarks, so how would they be any wiser about what's a fad and what's not?

  • >It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.

    Everything Zuck has done since the "dawn of AI" has been to intentionally subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are seeing emergently that AI is also threatening social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as real a way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?

    I believe exactly 0 percent of the decision to make Llama open-source and free was done altruistically as much as it was simply to try and push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is also strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.

    Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.

  • I think we will see the opposite. If we made no progress with LLMs we'd still have huge advancements and growth opportunities enhancing the workflows and tuning them to domain specific tasks.

    • I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.

      Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.

      1 reply →

    • I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open source models just catch up.

      My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.

      I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.

      2 replies →

    • I don't see how this works, as the cost of running inference is so much higher than the revenue earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where GPT-5 and Claude 4.1 cost-quality models are SOTA.

      4 replies →

  • The line was to buy Amazon as it was undervalued a la IBM or Apple based on its cloud computing capabilities relative to the future (projected) needs of AI.

  • Correction if I may: a lot of AI jobs will disappear. A lot of the usual jobs that were put on hold will return. This is good news for most of humankind.

  • "little shortsighted"

    Or, this knowingly could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots. And then hit pause button to let all that new talent figure out the next step.

  • As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train, I am personally convinced this is the technology of our lifetime.

    • You are welcome to share how AI has transformed a revenue generating role. Personally, I have never seen a durable example of it, despite my excitement with the tech.

      In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.

      7 replies →

    • why is it a train? If it's so transformative surely I can join in in a year or so?

    • I'll say it again since I've said it a million times: it can be useful and a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble".

      1 reply →

    • How are you using it? The execs and investors believe the road to profit is by getting rid of your role in the process. Do you think that’d be possible?

      1 reply →

    • If you really think this, `baby` is an apt name! Internet, Smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're like 18 y/o then sure, maybe LLMs is the biggest.

      Also disagree with missing the train, these tools are so easy to use a monkey (not even a smart one like an ape, more like a Howler) can effectively use them. Add in that the tooling landscape is changing rapidly; ex: everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome)

      The hard parts are running LLMs locally (what quant do I use? K/V quant? Tradeoffs? llama.cpp or Ollama or vLLM? What model? How much context can I cram in my VRAM? What if I do CPU inference? Fine-tuning? etc.) and creating/training them. A minimal local-inference sketch follows below.

      1 reply →
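
      To make the "running LLMs locally" part concrete, here is a minimal sketch of the client side, assuming an Ollama server is already running on its default port and that a model tagged "llama3" has been pulled; both of those are assumptions, and the same idea applies to a llama.cpp or vLLM server with a different endpoint.

          # Minimal local-inference sketch (assumes a local Ollama server on the
          # default port and a pulled model tagged "llama3" - both assumptions).
          import requests

          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3",   # hypothetical model tag; use whatever you pulled
                  "prompt": "Explain K/V cache quantisation trade-offs in one paragraph.",
                  "stream": False,     # ask for one JSON object instead of a token stream
              },
              timeout=120,
          )
          resp.raise_for_status()
          print(resp.json()["response"])

      The quant level, context window and CPU-vs-GPU questions mostly change how the server is launched and which model file you pull, not this client call.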

  • > It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.

    If AI is going to be integral to society going forward, how is it shortsighted?

    > She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).

    So you prefer a 2x gain rather than 10X gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT the past few years. Also a "financial investment person"? The anecdote feels made up.

    > She skillfully navigated the question in a way that won my respect.

    She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?

    > I personally believe that a lot of investment money is going to evaporate before the market resets.

    But you believe investing in MSFT was a better AI play than going with the "hype", even when objective facts show otherwise. Why should anyone care what you think about AI, investments and the market when you clearly know nothing about it?

I really do wonder if any of those rock star $100m++ hires managed to get a 9-figure sign-on bonus, or if the majority have year(s) long performance clauses.

Imagine being paid generational wealth, and then the house of cards comes crashing down a couple of months later.

  • I'm sure everyone is doing just fine financially, but I think it's common knowledge that these kind of comp packages are usually a mix of equity and cash earned out over multiple years with bonuses contingent on milestones, etc. The eye-popping top-line number is insane but it's also unlikely to be fully realized.

    • The point isn't doing fine financially, it's having left multimillion-dollar startups as founders.

      In essence, they have left stellar projects with huge money potential for the corporate rat race, albeit for serious $.

      3 replies →

    • >I'm sure everyone is doing just fine financially

      They are rich. Nobody is offered $100M+ comp unless they are already top 1% talent.

      1 reply →

  • Taking rockstar players 'off the pitch' is the best way second-rate competitors can neutralize their opponents' advantage.

  • It's all in RSUs.

    Supposedly, all people who join Meta are on the same contract. They also supposedly all have the same RSU vesting schedules.

    That means these "rockstars" will get a big sign-on bonus (but it's payable back if they leave inside 12 months), then ~$2M every 3 months in shares.

    • It's not even in RSUs. No SWEs/researchers are getting $100M+ RSU packages. Zuck said the numbers in the media were not accurate.

      If you still think they are, do you have any proof? any sources? All of these media articles have zero sources and zero proof. They just ran with it because they heard Sam Altman talk about it and it generates clicks.

      1 reply →

  • None of them are getting $100m+ packages. Zuck himself even debunked that myth. But the media loves to run with it because it generates clicks.

    • I have no idea what’s going on behind the scenes, but Zuckerberg saying “nah that’s not true” hardly seems like definitive proof of anything.

  • I'm not an academic, but it kinda feels strange to me to stipulate in your contract that you must invent harder

  • I have never heard of anyone getting a sign on bonus that was unconditional. When I have had signing bonuses they were owed back prorated if my employment ended for any reason in the first year.

    • Are most people that money hungry? I wouldn't expect someone like Zuckerberg to understand, but if I ever got to more than a couple million dollars, I'm never doing anything else for the sake of making more money again.

      2 replies →

    • I was at a startup where someone got an unconditional signing bonus. It wasn't deliberate, they just kept it simple because it was a startup and they thought they trusted the guy because he was an old friend of the CEO.

      The guy immediately took leave to get some medical procedure done with a recovery time, then when he returned he quit for another job. He barely worked, collected a big signing bonus, used the company's insurance plan for a very expensive procedure, and then disappeared.

      From that point forward, signing bonuses had the standard conditions attached.

  • If we're actually headed for a "house of cards" AI crash in a couple months, that actually makes their arrangement with Meta likely more valuable, not less. Meta is a much more diversified company than the AI companies that these folks were poached from. Meta stock will likely be more resilient than AI-company stock in the event of an AI bubble bursting. Moreover, they were offered so much of it that even if it were to crash 50%, they'd still be sitting on $50M-$100M+ of stock.

    • I am very certain that AI will slowly kill the rest of "social" in the social web outside of closed circles. And they made their only closed-circle app (WhatsApp) unusable and ad-infested. IMO, either way they are still in the process of slowly killing themselves.

    • A social media company is more diversified? Maybe compared to anthropic or openai, but not to any of the hyperscalers

  • Must feel real good to get a golden ticket out of the bubble collapse when it's this imminent.

    • Is it imminent? Reading the article, the only thing that's actually changed is that the CEO has stopped hand-picking AI hires and has placed that responsibility on Alexandr Wang instead. The rest is just fluff to turn it into an article. The tech sector being down is happening in concert with the non-tech sector sliding too.

I'm somewhere in the middle on this, with regards to the ROI... this isn't the kind of thing where you see immediate reflection on quarterly returns... it's the kind of thing where if you don't hedge some bets, you're likely to completely die out from a generational shift.

Facebook's product is eyeballs... they're being usurped on all sides between TikTok, X and BlueSky in terms of daily/regular users... They're competing with Google, X, MS, OpenAI and others in terms of AI interactions. While there's a lot of value in being the option for communication between friends and family, and the groups on FB don't have a great alternative, the entire market can shift greatly depending on AI research.

I look at some of the generated terrain/interaction demos (I think it was OpenAI's) and can't help but think that's a natural coupling to FB/Meta's investments in their VR headsets. They could potentially completely lose on a platform they largely pioneered. They could wind up like Blackberry if they aren't ready to adapt.

By contrast, Apple's lack of appropriate AI spending should be very concerning to any investors... Google's assistant is already quite a bit better than Siri and the gap is only getting wider. Apple is woefully under-invested, and the accountants running the ship don't even seem to realize it.

  • I think Apple is fine. When AI works without 1-in-5 hallucinations, then it can be added to their products. Showing up late with features that exist elsewhere but are polished in Apple's presentation is the way.

    • Have you used Siri recently? It's actually amazing how consistently it can be crap at tasks, considering the underlying tech. 1-in-5 hallucinations would be a welcome improvement.

      Using ChatGPT voice mode and Siri makes Siri feel like a legacy product.

      4 replies →

    • In general I don't think Google or Apple need AI.

      In practice, though, their platforms are closed to any assistant other than their own, so they have to come up with a competent service (basically Ben Thompson's "strategy tax" playing out in full).

      That question will be moot the day Apple allows other companies to ingest everything that's happening on the device and operate the whole device in reaction to the user's requests, and some company actually does a decent job at it.

      Today Google is doing a decent job and Apple isn't.

      3 replies →

  • > they're being usurped on all sides

    They did it to themselves. Facebook is not the same site I originally joined. People were allowed to people. Now I have to worry about the AI banning me.

    • I deleted my Facebook account 10 years ago, and I’ve been off Instagram for half a year. I recently tried to create a new Facebook account so that could create a Meta Business account to use the WhatsApp API for my business. Insta-ban with no explanation. No recourse.

      4 replies →

  • > They're competing with Google, X, MS, OpenAI and others in terms of AI interactions

    Am I the only one who finds the attempt to jam AI interactions into Meta's products useless, and that it only detracts from the product? Like, there'll be posts with comedy things, and then there are suggested 'Ask Meta AI' prompts about things the comedy mentions, phrased as earnest questions - it's not only irrelevant, but I guess it's kind of funny how random and stupid the questions are. The 'Comment summaries' are counter-productive because I want to have a chuckle reading what people posted; I literally don't care to have it summarised because I can just skim over a few in seconds - literally useless. It's the same thing with Gemini summaries in YouTube - I feel it actually detracts from the experience of watching the videos, so I actively avoid them.

    On what Apple is doing - I mean, literally nothing Apple Intelligence offers excites me, but at the same time nothing anybody else is doing with LLMs really does either... And I'm highly technical, general people are not actually that interested apart from students getting LLMs to write their homework for them...

    It's all well and good to be excited about LLMs, but plenty of these companies' customers just... aren't... If anything, Apple is playing the smart move here - let others spend (and lose) billions training the models and not making any real ROI, and they can license the best ones for whatever turns out to actually have commercial appeal when the dust settles and the models are totally commodified...

    • I was thinking about this... if you look at the scene generation and interaction demos (I think OpenAI's), it's a pretty natural fit for their VR efforts. Not that I'm sold on VR social networks, but there's definitely room for VR/AR enhancements... and even then AI has a lot of opportunities, beyond just LLM integration into FB/Groups.

      As an aside, Groups is about the only halfway decent feature in FB, and they seem to be trying to make it worse. The old chat integration was great, then they removed it, and now you get these invasive Messenger rooms instead.

    • God the AI answers that it gives in the Facebook Groups are so wrong that it's hilarious.

  • How many years of not seeing returns this quarter does it take before it's all hype?

    • How long did it take Space-X to catch a rocket with giant chopsticks?

      It's more than okay for a company with other sources of revenue to do research towards future advancement... it's not risking the downfall of the company.

  • Deja vu: Zuck already scaled down their AI research team a few years ago, as I remember, because they didn't deliver any tangible results. Meta culture likes improving metrics like retention/engagement and promotes managers if they show some improvement in their metrics. No one cares about long shots generally, and a research team is always the long shot.

  • > they're being usurped on all sides between TikTok, X and BlueSky

    Good grief. Please leave your bubble once or twice in a month.

    Tiktok yes. X and Bluesky, absolutely not.

    • Monthly active users:

      From DemandSage:

          Facebook - 12 billion!?
          TikTok - 1.59 billion
          X - 611 million
          Bsky - 38 million
      

      That's according to DemandSage ... I'm not sure I can trust the numbers, FB jumped up from around 3b last year, which again I don't trust. 12b is more than the global population, so it's got to be all bots. And even the 3b number is hard to believe (at close to half the global population), no idea how much of the population of earth has any internet access.

      From Grok:

          Facebook - 3.1 billion
          TikTok - 1.5-2 billion
          X - 650 million
          Bsky - 4.1 million
      

      Looks like I'm definitely in a bubble... I tend to interact 1:1 as much on X as Facebook, which is mostly friends/family and limited discussions in groups. A lot of what I see on feeds is copy/pasta from tiktok though.

      That said, I have a couple friends who are die hard on Telegram.

      4 replies →

I'm far from being a fan of the company, but I think this article is substantially overstating the extent of the "freeze" just to drum up news. It sounds like what's actually happening is a re-org [1] - a consolidation of all the AI groups under the new Superintelligence umbrella, similar to Google merging Brain and DeepMind, with an emphasis on finding the existing AI staff roles within the new org.

From Meta itself: “All that’s happening here is some basic organisational planning: creating a solid structure for our new superintelligence efforts after bringing people on board and undertaking yearly budgeting and planning exercises.”

[1] https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4?s...

Clickbait. Read the article. They just spent several billion hiring a leadership team. They are doing an all-hands to figure out what they need to do.

  • It's a bit frustrating that most don't read TFA and instead vent their AI angst at the first opportunity they get.

  • Since "AI bubble" has become part of the discourse, people are watching for any signs of trouble. Up to this point, we have seen lots of AI hype. Now, more than in the past, we are going to see extra attention paid to "man bites dog" stories about how AI investment is going to collapse.

  • Yes, because Meta has no incentive to act like there's no bubble.

    • So it's not clickbait, even though the headline does not reflect the contents of the article, because you believe the headline is plausible?

      I think AI is a bubble, but there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble. Sounds like they are spending even more money per researcher.

      2 replies →

Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.

So newer chips will not be exponentially better, only incrementally better, so unless the price of electricity comes down exponentially we might never see AGI at a price point that's cheaper than hiring a human.

Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.

The reason the internet, smartphones and computers have seen exponential growth since the 90s is the underlying increase in computing power. I personally used a 50 MHz 486 in the 90s and now use an 8c/16t 5 GHz CPU. I highly doubt if we will see the same form of increase in the next 40 years

  • We are either limited by compute, available training data, or algorithms. You seem to believe we are limited by compute. I've seen other people argue that we are limited by training data. It is my totally inexpert belief that we are substantially limited by algorithms at this point.

    • I think algorithms is a unique limit because it changes how much data or compute you need. For instance, we probably have the algorithms we need to brute force solving more problems today, but they require infeasible compute or data. We can almost certainly train a new 10T parameter mixture of experts that continues to make progress in benchmarks, but it will cost so much to train and be completely undeployable with today’s chips, data, and algorithms.

      So I think the truth is likely we are both compute limited and we need better algorithms.

      2 replies →

    • We are limited by both compute and available training data.

      If all we wanted was to train bigger and bigger models we have more than enough compute to last us for years.

      Where we lack compute is in scaling AI to consumers. Current models take too much power and specialized hardware to be profitable. If AI was able to improve your productivity by 20-30% but it cost you even 10% of your monthly salary, no one would use it. I have used up $10 worth of credits using Claude Code in an hour multiple times. Assuming I use it continuously for 8 hours every day in a month, 10 * 8 * 24 = $1920 (rough arithmetic sketched below). So it's not that far off the current cost of running the models. If the size of the models scales faster than the speed of the inference hardware, the problem is only going to get worse.

      I too believe that we will eventually discover an algorithm that gives us AGI. The problem is that we cannot will a breakthrough. We can make one more likely by investing more and more into AI but breakthroughs and research in general by their nature are unpredictable.

      I think investing in new individual ideas is very important and gives us lot of good returns. Investing in a field in general hoping to see a breakthrough is a fool's errand in my opinion.
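
      For what it's worth, here is the back-of-the-envelope arithmetic from the comment above as a tiny script; the $10/hour and 8 h/day x 24 days figures come from the comment itself, and the salary used for comparison is a made-up placeholder.

          # Back-of-the-envelope cost sketch; usage figures come from the comment
          # above, the salary is a purely illustrative placeholder.
          credits_per_hour_usd = 10      # roughly what heavy Claude Code use burned per hour
          hours_per_day = 8
          working_days_per_month = 24

          monthly_cost = credits_per_hour_usd * hours_per_day * working_days_per_month
          print(f"Monthly heavy-usage inference cost: ${monthly_cost}")   # $1920

          assumed_monthly_salary = 8000  # placeholder assumption, not from the comment
          print(f"Share of that salary: {monthly_cost / assumed_monthly_salary:.0%}")  # 24%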

    • If the LLM is multimodal would more video and images improve the quality of the textual output? There’s a ton of that and it’s always easy to get more.

  • > I highly doubt if we will see the same form of increase in the next 40 years

    People would have predicted this at 1GHZ. I wouldn’t discount anything about the future.

  • We are a few months into our $bigco AI push and we are already getting token constrained. I believe we really will need massive datacenter rollouts in order to get to the ubiquity everyone says will happen.

Mission accomplished: who would have thought that disrupting your competition by poaching their talent and erasing value (giving it away for free) would make people realize there is no long-term value in the core technology itself.

Don't get me wrong, we are moving to commoditization; as with any new tech, it'll become transparent to our lifestyle and a lot of money will be made as an industry, but it'll be hard to compete on it as a core business competence without cheating (and by cheating I mean your FANG company already having a competitive advantage).

  • Whoa, that's actually a brilliant strategy: accelerate the hype first by offering $100M comp packages, then stop hiring and strategically drop a few "yeah, the bubble's gonna pop soon" rumours. Great way to fuck with your competition, especially if you're Meta and you're not in the lead yourself.

    • But if Meta believe it's a bubble then why not let the competition continue to waste their money pumping it up? How does popping it early benefit Meta?

Make a mistake once, it’s misjudgment. Repeat it, it’s incompetence?

Meta nearly doubled its headcount in 2020 and 2021, assuming the pandemic growth would continue. However, Zuckerberg later admitted this was a mistake.

After reading Careless People and watching Meta’s metaverse and AI moves, Mark comes across as a child chasing the shiny new thing.

  • It's not really a fair characterisation, because he persisted for nearly 10 years dumping enormous investment into the VR business, and still does to this day. Furthermore, Meta's AI labs predated all the hype, and the company was investing and highly respected in the area way before it was "cool".

    If anything, I think the panic at this stage is arising from the sense of having his lunch stolen after having invested so much and for so long.

Quality over quantity.

Apparently it's better to pay $100 million for 10 people than $1 million for 1000 people.

  • 1000 people can't get a woman to have a child faster than 1 person.

    So it depends on the type of problem you're trying to solve.

    If you're trying to build a bunch of Wendy's locations, it's clearly better to have more construction workers.

    It's less clear that if you're trying to build ASI you're better off with 1000 people than 10.

    It might be! But it might not be, too. Who knows for certain til post-ex?

    • > 1000 people can't get a woman to have a child faster than 1 person.

      I always get slightly miffed about business comparisons to gestation: getting 9 women pregnant won't get you a child in 1 month.

      Sure, if you want one child. But that's not what business is often doing, now is it?

      The target is never "one child". The target is "10 children", or "100 children" or "1000 children".

      You are definitely going to overrun your ETA if your target is 100 children in 9 months using only 100 women.

      IOW, this is a facile comparison not worthy of consideration.[1]

      > So it depends on the type of problem you're trying to solve.

      This[1] is not the type of problem where the analogy applies.

      =====================================

      [1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!

      11 replies →

    • In re Wendy’s, it depends on whether you have a standard plan for building the Wendy’s and know what skills you need to hire for. If you just hire 10,000 random construction workers and send them out with instructions to “build 100 Wendy’s”, you are not going to succeed.

    • At the scale we're talking about though, if you need a baby in one month, you need 12,000 women. With that many women, the math says you should have a woman that's already 8 months pregnant, and you'll have a baby in 1 month.

  • One person who's figured how to make ASI is more useful than a bunch that haven't. Not sure that actually applies anywhere.

  • I'd rather pay $0 to n people if all they're going to do is make vibe-coded dogshit that spins its wheels and loses context all the time.

  • The reason they paid $100m for “one person” is because it was someone people liked to work for, which is why this article is a big deal.

What I don't get is that they are gunning for the people that brought us the innovations we are working with right now. How often does it happen that someone really strikes gold a second time in research at such a high level? It's not a sport.

  • You're falling victim to the Gambler's Fallacy - it's like saying "the coin just flipped heads, so I choose tails, it's unlikely this coin flips heads twice in a row".

    Realistically they have to draw from a small pool of people with expertise in the field. It is unlikely _anyone_ they hire will "strike gold", but past success doesn't make future success _less_ likely. At a minimum I would assume past success is uncorrelated with future success, and at best there's a weak positive correlation because of reputation, social factors, etc.

  • Even if they do not strike gold the second time, there can still be a multitude of reasons:

      1. The innovators will know a lot about the details, limitations and potential improvements concerning the thing they invented.
      2. Having a big name in your research team will attract other people to work with you.
      3. I assume the people who discovered something still have a higher chance to discover something big compared to "average" researchers.
      4. That person will not be hired by your competition.

    • 5. Having a few very publicly, extremely highly paid people makes others assume that anyone working on AI there is highly paid, even if not quite as extreme. Much of what high earners spend their money on is wealth signalling, and now they can get a form of that without the company having to pay them as much.

      4 replies →

  • Who else would you hire? With a topic as complex as this, it seems most likely that the people who have been working at the bleeding edge for years will be able to continue to innovate. At the very least, they are a much safer bet than some unproven randos.

    • Exactly this - having understood the field well enough to add new knowledge to it has to be a pretty decent signal for a research-level engineer.

      At the research level it’s not just about being smart enough, or being a good programmer, or even completely understanding the field - it’s also about having an intuitive understanding of the field where you can self pursue research directions that are novel enough and yield results. Hard to prove that without having done it before.

  • Because the innovations fail to deliver what was promised and the overall costs outweigh the outcomes.

Didn't Meta invest big in the Metaverse, then backtrack on that? Was it $20 billion?

I'd like these investments to pay off; they're bold, but it highlights how deep the pockets have to be to invest so much.

  • They didn't just invest; they made it core to their identity with the name change, and it fell so, so flat because the claims were nonsense hype for crypto pumps. We already had stuff like VR Chat (still going pretty strong) - it just wasn't corporate and sanitized for sale and mass monetization.

  • They're still on it though. The new headset prototypes with high FOV sound amazing, and they are iterating on many designs.

    They're already doing something like ~$500M/year in Meta Quest app sales. Granted not huge yet after their 30% cut, but sales should keep increasing as the headsets get better.

  • I haven't seen any evidence that Meta is backtracking on VR. They've got more than enough money to focus on both; in fact, they probably need to. Gen AI is a critical complement to the metaverse: without gen AI, metaverse content is too time-consuming to make.

I can see the value in actual AI. But it seems like in many instances how it is being utilized or applied is more related to terrible search functionality. Even for the web, it seems like we’re using AI to provide more refined search results, rather than just fixing search capabilities.

Maybe it’s just easier to throw ‘AI’ (heavy compute of data) at a search problem, rather than addressing the crux of the problem…people not being provided with the tools to query information. And maybe that’s the answer but it seems like an expensive solution.

That said, I’m not an expert and could be completely off base.

  • > is more related to terrible search functionality

    If you looked at $ spent per use case, I would think this is near the bottom of the list, with most of that usage coming from the free tiers.

It sounds like a lot of these big companies are being managed by LLMs and vibes at this point.

  • > vibes

    always has been

    (and there's comfort in numbers, no one got fired for buying IBM, etc..)

  • Definitely managed by vibes, but any company that tells you they're not is basically bullshitting.

A trillion dollars of value disappearing in 2 days. We've still got our NFT metaverse shipping waybill project going on somewhere in the org chart, right? Phew!

  • That's because it was never real to begin with. "Market cap" and "value" are not the same thing. "Value" is "I actually need this and it will dramatically improve my life". "Market cap" is "I can sell this to some idiot".

Metaverse (especially) or AI might make more sense if you could actually see your friends' posts (and vice versa), if the feed made sense (which it hasn't for years now), and if you could message people you aren't friends with yet without it getting lost in some 'Other' folder you won't discover until 3 years from now (Gmail has a Spam folder problem too... but the difference is you can see you have messages there and you can at least check it out for yourself).

What I'm trying to say is: make your product the barest minimum usable first, maybe? (Also, don't act like, as Jason Calacanis has called him, a marauder, copying everything from everyone all the time. What he's done with Snapchat is absolutely tasteless and, in the case of spying on them - which he's done - very likely criminal.)

>> Mr Zuckerberg has said he wants to develop a “personal superintelligence” that acts as a permanent superhuman assistant and lives in smart glasses.

Yann LeCun has spoken about this, so much so that I thought it was his idea.

In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?

  • I don't want to come across as a shill, but I think superintelligence is being used here because the end result is murky and ill-defined at this point.

    I think the concept is something like: "a tool that has the utility of a 'personal assistant', so much so that you wouldn't have to hire one of those." (Not so much that the "superintelligence" will mimic a human personal assistant.)

    Obviously this is just a guess though

  • Every time you ask it a question you need to cool it off by pouring a bottle of water on your head.

  • “ In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?”

    People probably said the same thing about “what if someone doesn’t want to carry a phone with them everywhere”. If it’s useful enough the culture will change (which I unequivocally think they won’t be, but I digress).

  • I think Mr Zuckerberg greatly underestimates how toxic his brand is. No way I want to become a borg for the "they just trust me, dumb fucks" guy.

    • The META rebrand was pretty brilliantly done. The makeover far outweighs this sort of sentiment for now.

Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around and they likely don't want to hire into the org while this is finalizing.

A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, and lean org, and they have been making several strategic hires for months at this point. All this says is that zuck thinks the org is in a good spot to start executing.

From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to some supposed realization that "its all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.

The problem with sentiment-driven market phenomena is that they lack fundamental support. When they crash, they can really crash hard. And as much as I see real value in the progress in AI, 95% of the investment I see happening is all based on sentiment right now. Actually deploying AI into real operational scenarios to unlock the value everyone is talking about is going to take many years, and it will look like a sinkhole of cost well before that. Buckle up.


Maybe this time investors will realize how incompetent these leaders are? How do you go from 250mil contracts to freezes in under a month?

  • I really don't understand this massive flip flopping.

    Do I have this timeline correct?

    * January, announce massive $65B AI spend

    * June, buy Scale AI for ~$15B, massive AI hiring spree, reportedly paying millions per year for low-level AI devs

    * July, announce some of the biggest data centers ever that will cost billions and use all of Ohio's water (hyperbolic)

    * Aug, freeze, it's a bubble!

    Someone please tell me I've got it all wrong.

    This looks like the Metaverse all over again!

    • The bubble narrative is coming from the outside. More likely is that the /acquisition/ of Scale has led to an abundance of talent that is being underutilised. If you give managers the option to hire, they will. Freezing hiring while reorganising is a sane strategy regardless of how well you are or are not doing.

      10 replies →

    • Maybe they are poisoning the well to slow their competitors? Get the funding you need secured for the data centers and the hiring, hire everyone you need and then put out signals that there is another AI winter.

      2 replies →

    • The scale they operate at makes billions feel like bucks.

      As a board member, I'd rather see a billion-dollar bubble test than a trillion-dollar mistake.

      1 reply →

    • The most amusing has to be when Zuckerberg publishes his "thoughts" about how he's betting 100% on AI... written underneath the logo "Meta".

      2 replies →

    • Zuckerberg's leadership style feels very reactionary and arrogant: flailing around for the new fad and newly hyped thing, scrapping everything when the current obsession doesn't work out, then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.

      Remember when he pivoted the entire company to the metaverse and it was all about avatars with no legs? And how proudly they trumpeted that the avatars were "now with legs!!", which still looked pathetic to everyone not in his bubble. Then for a while it was all about Meta glasses, and he was spamming those goofy, cringe glasses no one wants in all his Instagram posts - seriously, if you check out his insta he wears them constantly.

      Then this spring/summer it was all about AI: stealing rockstar AI coders from competitors and pouring endless money into flirty chatbots for lonely seniors. Now there's some bad press from that and a realization that it isn't the panacea they thought it was, so we're in the phase where this languishes; in about 6 months they'll abandon it and roll out a new obsession that will be endlessly hyped.

      Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like Facebook and Instagram. Wish they would just focus on increasing user functionality and enjoyment, and on trying to resolve the privacy issues, disinformation, ethical failures, social harm and political polarization caused by his continued poor management.

      4 replies →

    • By Amara's Law and the Gartner Hype Cycle, every technological breakthrough looks like a bubble. Investors and technologists should already know that. I don't know why they're acting like it's altcoins in 2021.

      5 replies →

  • IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill. There’s probably a proper term for this.

    • I think that meta is bad for the world and that zuck has made a lot of huge mistakes but calling him a one hit wonder doesn't sit right with me.

      Facebook made the transition to mobile faster than other competitors and successfully kept G+ from becoming competition.

      The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.

      Zuck hired Sheryl Sandberg and successfully turned a website with a ton of users into an ad-revenue machine. Plenty of other companies struggled to convert large user bases into dollars.

      This obviously wasn't all based on him. He had other people around him working on this stuff and it isn't right to attribute all company success to the CEO. The metaverse play was obviously a legendary bust. But "he just got lucky" feels more like Myspace Tom than Zuckerberg in my mind.

      9 replies →

    • He's really not. Facebook is an extremely well run organization. There's a lot to dislike about working there, and there's a lot to dislike about what they do, but you cannot deny they have been unbelievably successful at it. He really is good at his job, and part of that has been making bold bets and aggressively cutting unsuccessful bets.

      17 replies →

    • > IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill.

      It is no secret that the person who turned Facebook into a money-printing machine is/was Sheryl Sandberg.

      Thus, the evidence is clear that Mark Zuckerberg had the right idea at the right time (the question is whether this was because of his skills or because he got lucky), but turning his good idea(s) into a successful business was done by other people (led by Sheryl Sandberg).

      5 replies →

    • >IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time

      How many people were also in the right place at the right time and got lucky, then went bankrupt or simply never made it this high?

    • The term you're looking for is "billionaire". The amount of serendipity in these guys' lives is truly baffling, and only becomes more apparent the more you dig. It makes sense when you realize their fame is all survivorship bias. After all, there must be someone at the tail end of the bell curve.

    • It is at least a little suspicious that one week he's hiring like crazy, then next week, right after Sam Altman states that we are in an AI bubble, Zuckerberg turns around and now fears the bubble.

      Maybe he's just gambling that Altman is right: save his money for now and pick up AI researchers and developers at a massive discount next year. Meta doesn't have much of a presence in the space right now, and they have other businesses, so waiting a year or two might not matter.

      1 reply →

    • Ehh. You don’t get FB to where it is by being incompetent. Maybe he is not the right leader for today. Maybe. But you have to be right way, way more often than not to create a FB and get it to where it is. To operate from where it started to where it is just isn’t an accident or Dunning-Kruger.

  • Maybe this time the top posters on HN should stop criticizing one of the top performing founder CEOs of the last 20 years who built an insane business, made many calls that were called stupid at the time (WhatsApp), and many that were actually stupid decisions.

    Like do people here really think making some bad decisions is incompetence?

    If you do, your perfectionism is probably something you need to think about.

    Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company. Oh and please revisit your comment in these timeframes

    • I think many people just really dislike Zuckerberg as a human being and Meta as a company. Social media has seriously damaged society in many ways.

      It’s not perfectionism, it’s a desire to dunk on what you don’t like whenever the opportunity arises.

      2 replies →

    • I don't think it's about perfect predictions. It's more about going all in on Metaverse and then on AI and backtracking on both. As a manager you need to use your resources wisely, even if they're as big as what Meta has at its disposal.

      The other thing - the Peter principle says people rise until they hit a level where they can't perform anymore. Zuck is up there as high as you can go; maybe no one is really ready to operate at that level? It seems both he and Elon have made a lot of bad decisions lately. It doesn't erase their previous good decisions, but possibly some self-reflection is warranted?

    • > Like do people here really think making some bad decisions is incompetence?

      > If you do, your perfectionism is probably something you need to think about.

      > Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company.

      It's the effect of believing in (and being sold) meritocracy: if you are making literal billions of dollars for your work, then some will think it should be spotless.

      Not saying I think that way, but it's probably what a lot of people consider: being paid that much signals that your work should be absolutely exceptional, and big failures just show they are also normal, flawed people, so perhaps they shouldn't be worth a million times more than other normal, flawed people.

      2 replies →

  • By signing too many 250mil contracts.

    • Well, that's the incompetent piece. Setting out to write giant, historic employment contracts without a plan is not something competent people do. And seemingly it's not that they overextended a bit, either, since reports claimed the window to accept the contracts was extremely limited; under 30 minutes in some cases.

      1 reply →

  • > How do you go from 250mil contracts to freezes in under a month?

    Easy, you finished building up a team. You can only have so many cooks.

    • That's not really how that works in the corporate/big tech world. It's not as though Meta set out and said "Ok we're going to hire exactly 150 AI engineers and that will be our team and then we'll immediately freeze our recruiting efforts".

  • Yes, people who struck it rich are not miraculously more intelligent or capable. Seems obvious, but many people believe they are.

That's a man with conviction

  • Sorry, I was in the metaverse just now. I took my headset off though — could you please repeat that?

    • the metaverse push is the perfect analogy

      cool, fun concepts/technology fucked by the world's most boring people, who only have a desire to dominate markets and attention... god forbid anything happen slowly/gradually without it being about them

      1 reply →

How did he run out of money so fast? I think Zuck is one of those guys who get sucked into hype cycles, and no one around him will tell him so. Not even investors.

I’ve never seen so much evidence for a bubble yet so much potential to be the biggest Thing ever.

Just getting a lot of mixed signals right now. Not sure what to think.

  • Personally, I think it's both! It's a bubble, but it's also going to be something that slowly but steadily transforms the world in the next 10-20 years.

    • People seem very confused thinking that something can't both be valuable AND a bubble.

      Just look at the internet. The dot com bubble was one of the most widely recognised bubbles in history. But would you say the internet was a fad that went away? That there was no value there?

      There's zero contradiction at all in it being both.

    • We might see another AI winter first, is my assumption. I believe that LLMs are fundamentally the wrong approach to AGI, and that bubble is going to burst until we have a better methodology for AGI.

      Unfortunately, the major players seem focused on getting to AGI (or the pretense of it) through LLMs.

  • Dot-com was the same way... the Internet did end up having the potential everyone thought it would, businesses just didn't handle the influx of investment well.

  • Yeah, it truly IS transformative for industries, no denying anymore at this point. What we have will remain even after a pop. But I think AI was special in how there were massive improvements the more compute you threw at it for years. But then we ran out of training material and suddenly things got much harder. It’s this ramping up of investments to spearhead transformative tech and suddenly someone turns off the tap that makes this so conflicted. I think.

  • There is potential but it does seem like just throwing more money at LLMs is not going to get us to where the bubble expects

  • ... people said the same thing about the "metaverse" just a few years ago. "You know people are gonna live their entire lives in there! It's gonna change everything!" And 99% of people who heard that laughed and said "what are you smoking?" And I say the same thing when I hear people talk about "the AI machine god!"

I just did a phone screen with Meta, and the interviewer asked for Euclidean distance between two points; they definitely have some nerds in the building.

  • That's like 8th grade math - what am I misunderstanding about your comment?

    E: wasn't the only one.

    • K closest points using Euclidean distance and a heap is not 8th grade math, although any 8th grade math problem can be transformed into a difficult "adult" question. Sums are elementary; asking to find a window of prefix sums that adds up to something is still addition, but a little more tricky.
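
      To make it concrete, here is a rough Python sketch of the kind of question being described (K closest points to the origin via a bounded max-heap); the function name and the sample input are just my own illustration, not the actual Meta prompt:

          import heapq

          def k_closest(points, k):
              # Keep a max-heap of size k, keyed on negative squared Euclidean
              # distance (skipping sqrt is fine, since it preserves ordering).
              heap = []
              for x, y in points:
                  neg_dist = -(x * x + y * y)
                  if len(heap) < k:
                      heapq.heappush(heap, (neg_dist, x, y))
                  elif neg_dist > heap[0][0]:
                      # New point is closer than the current farthest kept point.
                      heapq.heapreplace(heap, (neg_dist, x, y))
              return [(x, y) for _, x, y in heap]

          print(k_closest([(1, 3), (-2, 2), (5, 8), (0, 1)], 2))  # [(-2, 2), (0, 1)] in some order

      The point of asking for a heap is presumably the O(n log k) bound instead of sorting everything.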

  • People are saying it is a high school maths problem! I'd like to see you provide a general method for accurately measuring the distance between two arbitrary points in space...

    • It was using a heap, in 10 minutes; the Euclidean distance formula was given and had to be used in the answer. Maybe they thought that was the question?

  • I suppose the trick is to have an ipad running GPT-voice-mode off to the side, next to your monitor. Instruct it to answer every question it overhears. This way you'll ace all of the "humiliation ritual" questions.

    • There's a YouTube channel made by a Meta engineer; he said to memorize the top 75 LeetCode Meta questions and their approaches. He doesn't say fluff like "recognize patterns." My interviewer was a 3.88/4 GPA master's comp-sci guy from Penn. I asked for feedback and he said to always be studying; it's useful if you want a career...

  • That's a basic high school math problem.

    • The foundation, as with every LeetCode problem, is a basic high school math problem; when the foundation of the problem is trigonometry, it's way harder than stacks, arrays, linked lists, BFS, DFS...

Previous articles and comments were "Praise Mark for being brave enough to go all in on AI!"

Now we have this ;)

Dear lord can Meta hiring be any more unstable? HR dept must be a revolving door at this point

  • I recently got an email from a Meta recruiter asking if I'm interested in a non-technical leadership position. I'm a programmer.

Does this mean the ai companies will start charging more? I only just started figuring this AI thing out.

  • They're all bleeding money so yes it's inevitable.

    It's always the same thing - Uber, food delivery, e-scooters, &c. - they bait you with cheap trials and stay cheap until the investors' money runs out, and once you're reliant on them they jack up the prices as high as they can.

  • Someone needs to finance absurd operational costs if the services are supposed to stick around.

  • May the enshittification begin.

    • They are just following the Thiel playbook: race to a monopoly position as fast as possible, then extract profits afterwards (which inevitably leads to enshittification).

> Sam Altman, OpenAI’s chief executive, has compared hype around AI to the dotcom bubble at the turn of the century

Sam is the main one driving the hype, that's rich...

  • > Sam is the main one driving the hype, that's rich...

    It's also funny that he's been accusing those who accept better job offers of being mercenaries. It does sound like these statements are trying to modulate competition, both in the AI race and in acquiring the talent driving it.

  • Now that you mention it, there's been a very sudden tone shift from both Altman and Zuckerberg. What's going on?

    • GPT-5 was a massive disappointment to people expecting LLMs to accelerate to the singularity. Unless Google comes out with something amazing in the next Gemini, all the people betting on AI firms owning the singularity will be rethinking their bets.

  • But then, he's purposely comparing it to the .com bubble - that bubble had some underlying merit. He could compare it to NFTs, the metaverse, the South Sea Company. It wouldn't make sense for him to say it's not a bubble when it's patently clear, so he picks his bubble.

  • Facebook, Twitter, and some others made it out of the social media bubble. Some "gig" apps survived the gig bubble. Some crypto apps survived peak crypto hype

    Not everyone has to lose which he's presumably banking on

Nvidia earnings are next week. That's the bellwether - everything else is speculation.

To me, AI is like the phone business. A few companies (Apple, Samsung) will manage to score a home run and the rest will be destined to offer commoditized products.

Maybe they are trying to signal to the AI talent in general to temper their expectations while simultaneously chasing rockstars with enormous sums.

And just three weeks ago I was suggesting a crash might hurt badly, when this very same Meta announced $250 million salary packages.

Makes you wonder whether Llama progress is not going too well and/or whether we're entering a plateau in LLM architecture development.

  • The article got me thinking that there's some sort of bottleneck that makes scaling astronomically expensive, or that the value just isn't really there.

    1. Buy up top talent from other's working in this space

    2. See what they produce over say, 6mo. to a year

    3. Hire a corpus of regular ICs to see what _they_ produce

    4. Open source the model to see if any programmer at all can produce something novel with a pretty robust model.

    Observe that nothing amazing has really come out (besides a pattern-recognizing machine that placates the user to coerce them into using more tokens for more prompts), and potentially call it on hiring for a bubble.

    • > Observe that nothing amazing has really come out

      I wouldn't say so. The problem is rather that some actually successful applications of such AI models are not what companies like Meta want to be associated with. Think into directions like AI boyfriend/girlfriend (a very active scene, and common usage of locally hosted LLMs), or roleplaying (in a very broad sense). For such applications, it matters a lot less if in some boundary cases the LLM produces strange results.

      If you want to get an impression of such scenes, google "character.ai" (roleplaying), or for AI boyfriend/girlfriend have a look at https://old.reddit.com/r/MyBoyfriendIsAI/

On the other hand, it's been shown time and again that we should do the opposite of whatever Zuck says.

That's how Zuck is. Gets excited and overhires.

I saw this during COVID, when we were hiring like crazy.

Title is a bit misleading. Meta freezes hiring after acquiring and hiring a ton while, somewhere else, Altman says it's a bubble.

The more obvious reason for a freeze is they just got done acquiring a ton of talent

I genuinely believe SamA has directed GPT5 to be nerfed to speedrun the bubble. Watch, he’ll let the smoking embers of the AI market cool and then reveal the next big advancement they are sitting on right now.

Really feels like it went from "AI is going to destroy everyone's jobs forever" to "whoops bubble" in about 6 weeks.

  • It'll be somewhere in between. A lot of capital will be burned, quite a few marginal jobs will be replaced, and AI will run into the wall of not enough new/good training material because all the future creators will be spoiled by using AI.

  • Even that came after "AI is going to make itself smarter so fast that it's inevitably going to kill us all and must be regulated" talk ended. Remember when that was the big issue?

    Haven't heard about that in a while.

    • I've seen a few people convince themselves they were building AGI trying to do that, though it looked more like the psychotic ramblings of someone entering a manic episode committed to github. And so far none of their pet projects have taken over the world yet.

      It actually kind of reminds me of all those people who snap thinking they've solved P=NP and start spamming their "proofs" everywhere.

  • Makes sense. Previously the hype was so all-encompassing that CEOs could simply rely on an implicit public perception that it was coming for our jerbs. Once they have to start explicitly saying that line themselves, it's because that perception is fading.

  • No worries, we’ll be back at the takeover stage in another 6 weeks

Nothing would give me a nicer feeling of schadenfreude than to see Meta, Google, and these other frothing-at-the-mouth AI hucksters take a bath on their bets.

  • Can we try to not turn HN into this? I come to this forum to find domain experts with interesting commentary, instead of emotionally charged low effort food fights.

    • your comment somehow feels more emotionally charged and low effort than the original. here, let's continue that...

  • Just until a few months ago, people on HN were shouting down anyone who argued that spending big money on building AI might not be a good idea...

Did the board realize Zuck was out of his mind or what?

  • Does it matter?

    Zuckerberg holds 90% of the class B supershares. There isn't much the board can do when the CEO holds most of the shareholder votes.

All news is being manipulated to make a ton of money shorting stock, based on perceived bad news.

This is similar to my view of AI: there is a huge bubble in current AI. Current AI is nothing more than a second-hand information-processing model, with inherent cognitive biases, lag behind environmental changes, and other limitations and shortcomings.

It never really made sense for Meta to get into AI, the motivations were always pretty thin and it seemed like they just wanted to ride the wave.

  • Isn't that what companies are supposed to do by seeing/following/setting trends in a way that increases revenue and profit for the shareholders?

  • I somewhat disagree here. Meta is a huge company with multiple products. Experimenting with AI and trying to capitalize on what's bound to be a larger user market, is a valid company angle to take.

    It might not pan out, but it's worth trying from a pure business point of view.

  • Meta's business model is to capture attention - largely with "content" - so they can charge lots of money to sprinkle ads amongst that content.

    I can see a lot of utility for Meta to get deeply involved in the unlimited custom content generating machine. They have a lot of data about what sort of content gets people to spend more time with them. They now have the ability to endlessly create exactly what it is that keeps you most engaged and looking at ads.

    Frankly, content businesses that get their revenue from ads are one of the most easily monetizable ways to use the outputs of AI.

    Yes, it will pollute the internet to the point of making almost all information untrustable, but think of how much money can be extracted along the way!

    • The whole point is novelty/authenticity/scarcity, though: if you just have a machine that generates an infinite supply of infinitely cute cat videos, then people will cease to be interested in cat videos. And it's not like they pay content creators anyway.

      It's Spain sinking their own economy by importing tons of silver.

      1 reply →

The money committed to payroll for these supposed top AI hires is equivalent to a mid-size startup's entire payroll; no wonder they had to hit pause.

The team drew criticism from executives this spring after the release of the latest Llama models underperformed expectations.

interesting

It's almost like nobody asked for the dramatic push of AI, and it was all created by billionaires trying to become even richer at the cost of people's health and the environment.

  • I still have yet to see it do anything useful. I've seen several very impressive "parlor tricks" which a decade ago i thought were impossible (image generation, text-parsing, arguably passing the turing-test) but I still haven't seen anybody use AI in a way that solves a real problem which doesn't already have an existing solution.

    I will say that grok is a very useful research assistant for situations where you understand what you're looking at but you're at an impasse because you don't know what its name is and are therefore unable to look it up, but then it's just an incremental improvement over search-engines rather than a revolutionary new technology.

LLMs are not the way to AGI and it's becoming clearer to even the most fanatic evangelists. It's not without reason GPT-5 was only a minor incremental update. I am convinced we have reached peak LLM.

There's no way a system of statistical predictions by itself can ever develop anything close to reasoning or intelligence. I think maybe there might be some potential there if we combined LLMs with formal reasoning systems - make the LLM nothing more than a fancy human language <-> formal logic translator, but even then, that translation layer will be inherently unreliable due to the nature of LLMs.
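
To make that split concrete, here is a toy sketch (assuming the z3-solver Python package, with a canned lookup standing in for the LLM translation step, which is exactly the part I'd expect to be unreliable in practice):

    from z3 import Bool, Implies, Not, Solver, sat

    def llm_to_logic(nl_statement):
        # Stand-in for the LLM layer: translate natural language into a formal
        # constraint. Hard-coded here purely for illustration.
        canned = {
            "if it rains the ground is wet": Implies(Bool("rains"), Bool("wet")),
            "it rains": Bool("rains"),
            "the ground is not wet": Not(Bool("wet")),
        }
        return canned[nl_statement.lower()]

    solver = Solver()
    for claim in ["If it rains the ground is wet", "It rains", "The ground is not wet"]:
        solver.add(llm_to_logic(claim))

    # The solver, not the language model, does the actual reasoning.
    print("consistent" if solver.check() == sat else "contradiction found")

Even in this toy form, every failure mode lives in the translation step, which is why I doubt the overall system can be made reliable.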

  • Yup. I agree with you.

    We're finally reaching the point where it's cost-prohibitive to sweep this fact under the rug with scaling out data centers and refreshing version numbers to clear contexts.

Good call in this case specifically, but lord this is some kind of directionless leadership despite well thought out concerns over the true economic impact of LLMs and other generative AI tech.

Useful, amazing tech, but only for specific niches, and not as a generalist application that will upend and transform the world as we know it.

I find it refreshing to browse r/betteroffline these days after 2 years of being bombarded with grifting LinkedIn lunatics everywhere you look.

Must be nice to do whatever he wants, without worrying about consequences…

Is this the stage of the bubble where they burst the bubble by worrying that there’s a bubble?

> Mark Zuckerberg has blocked recruitment of artificial intelligence staff at Meta, slamming the brakes on a multibillion-dollar hiring spree amid fears of an AI bubble.

> amid fears of an AI bubble

Who told the Telegraph that these two things are related? Is it just another case of wishful thinking?

Mark created the bubble. Other investors saw few opportunities for investment, so they put more money into a few companies.

What we need is more independent and driven innovation.

Right now, the greatest obstacle to independent innovation is the massive data banks the bigger companies have.

In a few months: "sorry, my whims proved wrong again, so we'll take the healthcare and stability away from, I guess, 10% of you."

I feel like the giant $100 million / $1 billion salaries could have been better spent hiring a ton of math, computer science, and data science graduates and forming an AI skunkworks out of them.

Also throw in a ton of graduates from other fields - sciences, arts, psychology, biology, law, finance, or whatever else you can imagine - to help create data and red-team their fields.

Hire people with creative writing and musical skills to give it more samples of creative writing, songwriting, summarization, etc.

And people who are good at teaching and breaking complex problems into easier-to-understand chunks for different age brackets.

Their user base is big, but it's not the same as ChatGPT's; they won't get the same tasks to learn from users that ChatGPT does.

Is it just me or does it feel like billionaires of that ilk can never go broke no matter how bad their decisions are? The complete shift to the metaverse, the complete shift to LLMs and fat AI glasses, the bullheaded “let’s suck all talents out of the atmosphere” phase and now let’s freeze all hiring. In a handful of years.

And yet, billionaires will remain billionaires. As if there are no consequences for these guys.

Meanwhile I feel another bubble burst coming that will leave everyone else high and dry.

  • The top 100 richest people on the globe can do a lot more stupid stuff and still walk away to a comfortable retirement, whereas the bottom 10-20 percent don't have that luxury.

    Not to mention that these rich guys are playing with the money of even richer companies with way too much "free cash flow".

It could be that, beyond the AI bubble, Meta has a broader read on economic conditions. Corporate spending cuts often follow such insights.

"Now let's make the others doubt that this is a meaningful investment"

After phase 1, "the shopping spree".

"Mark Zuckerberg freezes AI hiring after he personally offered 250M to a single person and the budget is now gone."

How to make a bubble pop: announce a trillion dollar company has stopped hiring in that area.

If AI really is a bubble and somehow imploded spectacularly for the rest of this year, universities would continue to spit out AI specialists for years to come. Mr. Z. will keep hiring them into every opening that comes up whether he wants to or not.

  • Silicon Valley has never seen a true bubble burst; even the legendary dot-com bubble was a minor setback from which the industry fully recovered in about 5-10 years.

    I have been saying for at least 15 years now that eventually Silly Valley will collapse when all these VCs stop funding dumb startups by the hundreds in search of the elusive "unicorns", but I've been wrong at every turn; it seems that no matter how much money they waste on dumb bullshit, the so-called unicorns actually do generate enough revenue to make funding dumb startup ideas a profitable business model...

Explains why AI companies like Windsurf were hunting for buyers to hold the bag

As an outsider, what I find most impressive is how long it took for people to realize this was a bubble.

It has been one for a few years now.

  • Note: I was too young to fully understand the dot com bubble, but I still remember a few things.

    The difference I see is that, unlike websites like pets.com, AI gave the masses something tangible and transformative, with the promise it could get even better. Along with these promises, CEOs also hinted at a transformative impact "comparable to electricity or the internet itself".

    Given the pace of innovation in the last few years I guess a lot of people became firm believers and once you have zealots it takes time for them to change their mind. And these people surely influence the public into thinking that we are not, in fact, in a bubble.

    Additionally, the companies that went bust in the early 2000s never had such lofty goals/promises to match their lofty market valuations, and by comparison today's high market valuations/investments are somewhat flying under the radar.

    • > The difference I see is that, unlike websites like pets.com, AI gave the masses something tangible and transformative, with the promise it could get even better.

      The promise is being offered, that's for sure. The product will never get there, LLMs by design will simply never be intelligent.

      They seem to have been banking on the assumption that human intelligence truly is nothing more than predicting the next word based on what was just said/thought. That assumption sounds wrong on the face of it and they seem to be proving it wrong with LLMs.

      2 replies →

  • From the Big Short: Lawrence Fields: "Actually, no one can see a bubble. That's what makes it a bubble." Michael Burry: "That's dumb, Lawrence. There are always markers."

    • Ah Michael Burry, the man who has predicted 18 of our last 2 bubbles. Classic broken clock being right, and in a way, perfectly validates the "no one can see a bubble" claim!

      If Burry could actually see a bubble/crash, he wouldn't be wrong about them 95%+ of the time... (He actually missed the covid crash as well, which is pretty shocking considering his reputation and claims!)

      Ultimately, hindsight is 20/20 and understanding whether or not "the markers" will lead to a major economic event or not is impossible, just like timing the market and picking stocks. At scale, it's impossible.

      3 replies →

what happened to the metaverse??? I thought we finally had legs!

Seriously, why does anyone take this company seriously? It's gotta be the worst of big tech, besides maybe anything Elon touches, and even then...

  • 1. They've developed and open sourced incredibly useful and cool tech

    2. They have some really smart people working there

    3. They're well run from a business/financial perspective, especially considering their lack of a hardware platform

    4. They've survived multiple paradigm shifts, and generally picked the right bets

    Among other things.

      0. Many people use Facebook Messenger as their primary contact book.

      Even my parents are on Facebook messenger.

      Convincing people to use signal is not easy, and there are lots of people I talk to whose phone number I don't have.

"We believe in putting this power in people’s hands to direct it towards what they value in their own lives"

Either Zuckerberg has drunk his own Kool Aid, or he is cynically lying to everyone, but neither is a good look.

While I think LLMs are not the pathway to AGI, this bubble narrative appears to be a concerted propaganda campaign intended to get people to sell, and it all started with Altman, the guy who was responsible for always pumping up the bubble. I don't know who else is behind this, but the Telegraph appears to be a major outlet of these stories. Just today alone:

https://www.telegraph.co.uk/business/2025/08/20/ai-report-tr... https://www.telegraph.co.uk/business/2025/08/21/we-may-be-fa... https://www.telegraph.co.uk/business/2025/08/21/zuckerberg-f...

Other media outlets are also making a massive push of this narrative. If they get their way, they may actually cause a massive selloff, letting everyone who profited from the giant bubble they created buy everything up cheap.

If there is a path to AGI, then ROI is going to be enormous, literally regardless of how much was invested. Hopefully this is another bubble; I would really rather not have my life's work vaporized by the singularity.

I think I have said it before here (and in real life too) that AI is just another bubble (let alone AGI, which is a complete joke), and all I got was angry faces and responses. Tech has always had bubbles; early adopters get the biggest slice, and they try as much as possible to keep it alive later to maximize that cut. By the time the average person is aware of it and is talking about it, it's over already. Previous tech bubbles: the internet, search engines, content makers, smartphones, cybersecurity, blockchain and crypto, and now generative AI. By the way, AI was never new, and anyone in the field knows this. ML was already part of some tech before generative AI kicked in.

Glad I personally never jumped on the hype and still focused on what I think is the big thing, but until I get enough funds to be the first in the market, I will keep it low.

I don’t think it’s entirely a bubble. Definitely this is revolutionary technology on the scale of going to the moon. It will fundamentally change humanity.

But while the technology is revolutionary the ideas and capability behind building these things aren’t that complicated.

Paying a guy millions doesn’t mean shit. So what Mark Zuckerberg was doing was dumb.

  • > on the scale of going to the moon

    Of all the examples of things that actually had an impact, I would pick this one last... the steam engine, the internet, personal computers, radios, GPS, &c. But going to the moon? The thing we did a few times and stopped doing once we won the USSR vs USA dick-measuring contest?

    • Impact is irrelevant. We aren’t sure about the impact of AI yet. But the technology is revolutionary. Thus, for the example I picked something that's revolutionary but whose impact is not as clear.

The most likely explanation I can think of are drugs.

Offering 1B dollar salaries and then backtracking, it's like when that addict friend calls you with a super cool idea at 11pm and then 5 days later they regret it.

Also rejecting a 1B salary? Drugs, it isn't unheard of in Silicon Valley.

How did this get pushed off the front page with over 100 points in less than an hour?

YC does not like this kind of article?

... the bubble that he created? After he threw $100,000,000,000 into a VR bubble mostly of his making? What a fucking jackass manchild.

BTW: Meta specifically denies that the reason is bubble fears, and they provide an alternate explanation in the article.

Better title:

Meta freezes AI hiring due to some basic organizational reasons.

  • They would deny bubble fears even if leaked emails proved that it was the only thing they talked about.

    Would anyone seriously take Meta's or any megacorp's statements at face value?

They were only going to get huge options/stock based on the growth of the business.

Plus they will have had a vesting schedule.

Beside the point that it was mental, but the dude wanted the best and was throwing money at the problem.