Hypercapitalism and the AI talent wars

3 days ago (blog.johnluttig.com)

“The AI capital influx means that mega-projects no longer seem outlandishly expensive. This is good for the world!”

Is it? This whole piece just reads of mega funds and giga corps throwing ridiculous cash for pay to win. Nothing new there.

We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.

Things have gone parabolic! It’s giga mega VC time!! Adios early stage, we’re doing $200M Series Seed pre revenue! Mission aligned! Giga power law!

This is just M2 expansion and wealth concentration. Plus a total disregard for 99% of employees. The 1000000x engineer can just do everyone else’s job and the gigachad VCs will back them from seed to exit (who even exits anymore, just hyper scale your way to Google a la Windsurf)!

  • > This is just M2 expansion and wealth concentration.

    I just want to point that there's no scientific law that says those two must move together.

    A government very often needs to print money, and it's important to keep in mind that there's no physical requirement that this money immediately must go to rich people. A government can decide to send it to poor people exactly just as easily as to rich. All the laws forbidding that are of the legal kind.

    • All true of course, with a clarification: even if all the newly printed money is put in the hands of the poor, the resultant inflation raises the prices of hard assets which are overwhelmingly held by the rich (fueling wealth inequality)

      2 replies →

  • There is something immoral about a company saying "we are going to spend $100bn this year on building a LLM"

    The icing on the cake is when all their competitors say "so will we".

  • I think the author should've clarified that this is purely a conversation about the platform plays. There will be hundreds of thousands of companies on the application layer, and mini-platform plays, that will have your run-of-the-mill new grad or 1x engineer.

    Everything else is just executives doing a bit of a dance for their investors, a la "We won't need employees anymore!"

    • Optimistic take. What's to say the application layer won't be dominated by the big players as they hoover up anything remotely promising, leaving the crumbs on the mini-platforms ... a la the Apple App Store ...

      Sorry, I'm pessimistic, as recent experience is one of hyper-concentration of ideas, capital, talent, everything.

    • Oh great, so just the most important companies that are inherently monopolistic gatekeepers. It's worked out so well with all these current platforms totally not restricting access and abusing everyone else with rent-seeking and other anti-competitive behaviors.

  • Ironically, there's been no M2 expansion since the Covid days; M2-to-GDP is back to what it was pre-Covid, and overall it didn't even increase much since the GFC. It's only 1.5x what it was at the bottom in 1997, when the cost of capital was much higher than today. I think this concern is misplaced.

    • M2 is the wrong statistic for sure, but the thrust of GP's comment is accurate, IMO. Fed intervention has not remotely been removed from the economy. The "big beautiful bill" probably just amounts to another round of it (fiscal excess will lead to a crisis which will force a monetary bailout).

      2 replies →

    • Yeah I just threw out M2 because it's easily understood / harped on but it's certainly much more complicated than that.

  • >We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.

    University enrolment is actually set to sharply decline. It's called the Demographic Cliff: https://www.highereddive.com/news/demographic-cliff-colleges...

    • Off topic, but: Supply and demand says that, if university enrollment drops sharply, the price of a university education should go down. That sounds like good news to me.

      4 replies →

  • If AI is important, speeding up development of it is good.

    > We can’t train more people?

    Of course people are being trained at Universities. Outside of The Matrix, it takes a few years for that to complete.

    • Speed of development of something important isn't necessarily good. Humans are bad at absorbing a lot of change at once and it takes time to recognize and mitigate second-order effects. There's plenty of benefit to the systems that disruptors operate within (society) to not moving as fast as possible... of course since our economic systems don't factor in externalities, we've instead turned all of society into a commons.

      8 replies →

  • if you're targeting $200M, then I guess each round is to hire one or two engineers for one year lol

    I'm curious: if you're one of these AI engineers getting $100M, do you quibble over the health insurance? I mean, at that point you can fully fund any operation you need, and whatever long-term care you need for 50 years, easily.

    • "Yes sorry I'm turning down your $100M because I need 2 parking spots for my sidecar" :p

  • I agree with your sentiment.

    > This is just M2 expansion

    What is "M2 expansion"?

  • Fwiw universities are producing less talent. They have been getting hammered with budget shortfalls thanks to Trump cutting research funding, and this manifests in programs accepting fewer students and professors being able to fund fewer students.

    • They are producing less talent as industry defines it. It is because a large percentage of the people who could teach them anything useful (to industry) are actually employed by industry, and don't want to work in academia.

      Another complication is academia simply does not have the resources to remotely compete with the likes of Google, OpenAI, Anthropic, xAI, etc.

    • Sure but that's new as of a few months. The university I went to still accepts the same number of students per year as it has for many years. Those numbers don't change much without significant land / program expansion which is certainly being cut now.

  • > mega funds and giga corps throwing ridiculous cash for pay to win

    > This is just M2 expansion and wealth concentration

    I actually think "throwing ridiculous cash" _reduces_ the wealth concentration, particularly if a chunk of it is developer talent bidding war. This is money that had been concentrated, being distributed. These over-paid developers pay for goods or services from other people (and pay income taxes!). Money spent on datacenters also ends up paying people building and maintaining the data-centers, people working in the chip and server component factories, people developing those chips, etc etc. Perhaps a big chunk ends up with Jensen Huang and investors in NVidia, but still, much is spent on the rest of the economy along the way.

    I don't feel bad about rich companies and people blowing their money on expensive stuff; that's distributing the wealth. Be more worried about wealthy companies/people who are very efficient with their spending ...

    • > I actually think "throwing ridiculous cash" _reduces_ the wealth concentration, particularly if a chunk of it is developer talent bidding war.

      Very few developers are benefiting from this "talent bidding war". Many more developers are being let go as companies decide to plow more of their money into GPUs instead of paying developers.

  • [flagged]

    • Wealth is power. Power decides what we use our collective time and resources on. The more concentrated power is, the less we use that time and those resources on things that improve the life of the average person, and the more on things that matter to the few with wealth.

      2 replies →

    • I’m going to assume that this is just some edgy post, but you should read up on the relationship between wealth inequality and corruption, social mobility, and similar factors.

    • > simply because opposite of this is the prevailing normie zeitgeist

      We'd love to be able to have reasoned discussions about economics and the pros and cons of wealth concentration vs redistribution here. But this is not the way to do it, and the comment led to an entirely predictable flamewar. Please don't do this on HN, and please make an effort to observe the guidelines, as you've been asked to do before.

      https://news.ycombinator.com/newsguidelines.html

    • Man it's hard to read stuff like this on the internet. When has wealth concentration ever been a good thing? Wealth is power and power leads to abuse almost universally.

https://medium.com/@villispeaks/the-blitzhire-acquisition-e3...

> Blitzhires are another form of an acquisition.. not everybody may be thrilled of the outcome.. employees left behind may feel betrayed and unappreciated.. investors may feel founders may have broken a social contract. But, for a Blitzhire to work, usually everybody needs to work together and align. The driver behind these deals is speed. Maybe concerns over regulatory scrutiny are part of it, but more importantly speed. Not going through the [Hart-Scott-Rodino Antitrust Act] HSR process at all may be worth the enormous complexity and inefficiency of foregoing a traditional acquisition path.

From comment on OP:

> In 2023–2024, our industry witnessed massive waves of layoffs, often justified as “It’s just business, nothing personal.” These layoffs were carried out by the same companies now aggressively competing for AI talent. I would argue that the transactional nature of employer-employee relationships wasn’t primarily driven by a talent shortage or human greed. Rather, those factors only reinforced the damage caused by the companies’ own culture-destroying actions a few years earlier.

2014, https://arstechnica.com/tech-policy/2014/06/should-tech-work...

> A group of big tech companies, including Apple, Google, Adobe, and Intel, recently settled a lawsuit over their "no poach" agreement for $324 million. The CEOs of those companies had agreed not to do "cold call" recruiting of each others' engineers until they were busted by the Department of Justice, which saw the deal as an antitrust violation. The government action was followed up by a class-action lawsuit from the affected workers, who claimed the deal suppressed their wages.

Also, the people being hired now for insane sums of money, are being hired because they have deep knowledge in design / implementation of AI models and infrastructure that scale to billions of users.

In order to operate on a scale like that, you obviously need to have worked somewhere that has that magnitude of users. That makes the pool of candidates quite small.

It’s like building a spaceship. Do you hire the people that have only worked on simulations, or do you try to hire the people that have actually been part of building the most advanced spaceship to date? Given that you’re also in a race against other competitors.

  • With all these mega-offers going out, I object when people say that they're paying for “talent”.

    These AI folks are good, but not orders of magnitude better than engineers and researchers already working in tech or academia. Lots of folks are capable of building an AI system, the reason they haven’t is that they haven’t been in a situation where they have the time/money/freedom to do it.

    These mega offers aren’t about “talent”, they are about “experience”

    • > These mega offers aren’t about “talent”, they are about “experience”

      Well, yes.

      Talent doesn't exist in the form people would like to believe, and to whatever degree it does, experience is the most reliable proxy for identifying it.

    • As one of the many people who are fairly experienced in AI (but at small startups) and hasn't had Zuck personally knock on my door, I have had a few moments of "wait, 9 figure salary? Can I at least get 7 figures?"

      But the truth is it's not "just" about experience. Most of these people have been pushing the limits of their fields for their entire careers. It's not like having "the time/money/freedom" to do it is randomly distributed, even among talented, smart people. The people in this talent pool were all likely aggressive researchers in a very specialized field, then likely fought hard to get on elite teams working close to the metal on these massive-scale inference problems, and they continued to follow this path until they got where they are.

      And the truth is, if you're at least "good" in this space, you do get your piece of the pie at the appropriate scale. I'm still making regular dev income, but my last round of job searching (just a few months ago) was insane. I had to quit my job early because I couldn't manage all the teams I was talking to. I've been through some hot tech markets over my career, but never anything like this. Meanwhile many of my non-AI peers are really struggling to find new roles.

      So there's no reason to cast shade on the genuine talent of these people (though I think we all feel that frustration from time to time).

    • > These mega offers aren’t about “talent”, they are about “experience”

      I'm sorry, but what's the specific distinction? When the Lakers pay Lebron $54MM per season, is that for his innate talent, or is it for the 20k hours he's spent perfecting his game?

      This is a lot of hand-wringing over nothing. We've seen people paid outrageous sums of money for throwing a ball for DECADES without any complaints, but the moment a filthy computer nerd is paid the same money to build models, it's pitchforks time.

      The only thing wrong with the current compensation kerfuffle is that it happened so late. People like Einstein, Von Neumann, Maxwell, Borlaug, etc should have been compensated like sportsball stars, as well.

  • > Also, the people being hired now for insane sums of money, are being hired because they have deep knowledge in design / implementation of AI models and infrastructure that scale to billions of users.

    That's what they want you to believe, and in some cases that's true. Many though are just grifters. They were able to:

    1. Gain access to the right people at the right levels to have the right conversations.

    2. Build on that access to gain influence focused on AI hype

    3. Turn that access/influence into income

    That doesn't necessarily imply /anything/ about their actual delivery performance or technical prowess.

> If the top 1% of companies drive the majority of VC returns

The author brings this up yet fails to realize that the behavior of current staff shows we have hit, or have passed, peak AI.

Moore's Law is dead, and it isn't going to come through and make AI any more affordable. Look at the latest GPUs: IPC is flat. And no one is charging enough to cover the running costs (bandwidth, power) of the compute being used, never mind turning NVIDIA into a $4 trillion company.

> Meta’s multi-hundred million dollar comp offers and Google’s multi-billion dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.

All this signals is that those in the know have chosen to take their payday. They don't see themselves building another Google-scale product; they don't see themselves delivering on sama's vision. They KNOW that they are never going to be the 1% company, the unicorn. It's a stark admission that there is NO breakout.

The math isn't there in the products we are building today: to borrow a Bay Area quote, there is no there there. And you can't spend your way to market capture / a moat, like every VC gold rush of the past.

Do I think AI/ML is dead? NO, but I don't think that innovation is going to come out of the big players, or the dominant markets. It's going to take a bust, cheap and accessible compute (a fire sale on used processing), and a new generation of kids to come in hungry and willing to throw away a few years on a big idea. Then you might see interesting tools and scaling down (to run locally).

The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.

  • > the 10x engineer meme doesn’t go far enough – there are clearly people that are 1,000x the baseline impact.

    Plenty out there who want authors like this believing it enough to write it

    • Obviously the specifics are going to depend on exactly how a team pegs story points, but if an average engineer delivers 10 story points during a two week sprint, then that would mean that a 1000x engineer would deliver 10000 story points, correct? I don't see how someone can actually believe that.

      10 replies →

  • > The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.

    this feels like a fundamental misunderstanding of how video game dialogue writing works. it's actually important that a player understands when the mission-critical dialogue is complete. While the specifics of a line becoming a meme may seem undesirable, it's far better that a player hears a line they know means "i have nothing to say" 100 times than generating ai slop every time the player passes a guard.

    • > this feels like a fundamental misunderstanding of how video game dialogue writing works.

      Factorio, Dwarf Fortress, Minecraft.

      There are plenty of games where the whole story is driven by cut scenes.

      There are plenty of games that shove your quests into their journal/pip boy to let you know how to drive game play.

      Don't get me wrong, I loved Zork back in the day (and still do), but we have evolved past that, and the tools to move us further could be there.

      6 replies →

  • > The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.

    "Local Model Augmentation", a sort of standardized local MCP that serves as a layer between a model and a traditional app like a game client. Neat :3

> If the top 1% of companies drive the majority of VC returns, why shouldn’t the same apply to talent? Our natural egalitarian bias makes this unpalatable to accept, but the 10x engineer meme doesn’t go far enough – there are clearly people that are 1,000x the baseline impact.

https://www.youtube.com/watch?v=0obMRztklqU

  • It took me a while to get that it is satire. I tried to figure out the rules ... because it is satire?

I find the current VC/billionaire strategy a bit odd and suboptimal. If we consider the current search for AGI as something like a multi-armed bandit seeking to identify “valuable researchers”, the industry is way over-indexing on the exploitation side of the exploitation/exploration trade-off.

If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas in exchange for X% of the payoff should their approach lead to some step function improvement in capability. The current “AI talent war” is very different than sports, because unlike a star tennis player, it’s not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
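The bandit framing can be made concrete with a toy simulation (a hypothetical sketch, not from the article; the arms, payoffs, and epsilon value are all made up). Exploitation funds the current best-looking bet; exploration keeps funding the long shots so a better one can surface:

```python
import random

def epsilon_greedy(true_payoffs, epsilon=0.3, rounds=10_000, seed=0):
    """Toy epsilon-greedy bandit: each arm is a 'research bet' with an
    unknown mean payoff. Exploit funds the current leader; explore
    funds a random arm so dark horses still get tried."""
    rng = random.Random(seed)
    n = len(true_payoffs)
    counts = [0] * n    # times each arm was funded
    means = [0.0] * n   # running estimate of each arm's payoff
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                       # explore
        else:
            arm = max(range(n), key=lambda i: means[i])  # exploit
        reward = rng.gauss(true_payoffs[arm], 1.0)       # noisy payoff
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts, means

# Arm 3 is the dark horse with the best true payoff; with enough
# exploration it ends up absorbing the bulk of the funding.
counts, means = epsilon_greedy([1.0, 1.2, 1.1, 2.0])
```

With epsilon near zero, the simulation tends to lock onto whichever arm happened to look good early — which is the over-indexing on exploitation described above.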

  • In fact this is much more optimal when looking at history. Strangely, success often comes from dark horses. But it makes sense, since you can't have paradigm shifts by maintaining the paradigm. Which is what happens when you hyper focus on a few individuals (who you generally pick by credentials).

    The optimal strategy is to lean on the status quo but also cast your net far and wide. There's a balance of exploration/exploitation, but exploitation feels much safer. Weirdly, you need to be risky and go against the grain if you want to play it safe.

    With the money these companies are throwing around we should be able to have a renaissance of AI innovations. But we seem to just want to railroad things. Might as well throw the money down the drain.

  • > If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds.

    I had an interesting conversation with an investor around the power vs knowledge dynamic in the VC world and after a few hours we'd basically reinvented higher education with reverse tuition. Defining a general interest or loose problem space and then throwing money over a wall to individuals excited about exploring the area seems wasteful until you look at the scale of failed projects.

  • Agreed, and I suspect the explanation is that these plays are done not to search for a true AGI, but to drive up hype (and 'the line').

    • The higher the line goes, the higher the expected value of return on investment. It’s a virtuous cycle based on a bet on all horses, but since the EV is so high for first mover advantage for AGI, it might be worth it to overleverage compared to the past for your top picks? These are still small sums for Zuckerberg to pay personally, let alone for Meta to pay. This is already priced in.

      4 replies →

  • We saw it happen already with Deepseek.

    SV has already thrown it down the memory hole but for a good three months, until everyone else copied their paper, the SOTA reasoning model available to the public was open source, Communist[0] and came out of a nearly defunct Chinese hedge fund.

    [0] If you don't believe the communist part just ask it about the American economy.

  • yep. and even the sports analogy doesn't fully explain what's going on. if we are talking "true" AGI with potential to replace people wholesale their strategy is telling in that they aren't optimizing for the "end game". maybe it's a factor of just gathering all the mindshare/hype/resources and THEN they can go actually figure it out /s.

    it would be like if you were looking to train the next tennis star who had the ability to basically upend the entire game as we know it. maybe you saw a few people with a unique way of playing who were dominating by an order of magnitude. you DEF would see teams and coaches holding open tryouts and trying very unconventional things for anyone they could find who had promise.

    for the record i think "AI" is not hype and is changing the way things are done permanently, but it's yet to be seen whether all these spent billions can actually meet the expected return (AGI). it's hard to separate out the true innovations from the obvious grift/money grab also going on.

> The French had a uniquely high Gini coefficient before the Revolution.

I feel like this one line captures the elephant in the room that the author is trying hard to convince himself isn't there...

  • French aristocrats didn't have trillion dollar industries brainwashing the population to be on their side, nor did they have AI powered armies to defend them when the people rose up.

> AI catch-up investment has gone parabolic, initially towards GPUs and mega training runs. As some labs learned that GPUs alone don't guarantee good models, the capital cannon is shifting towards talent.

So, no more bitter lesson?

  • Those who don't learn the lesson of the last grift are doomed to grift over and over again.

The full bodied palate of this AI market mirrors the sharp nose of 2023 AI doomerism.

The argument goes: if AI is going to destroy humanity, even if that is a 0.001% chance, we should all totally re-wire society to prevent that from happening, because the _potential_ risk is so huge.

Same goes with these AI companies. What they are shooting for, is to replace white collar workers completely. Every single white collar employee, with their expensive MacBooks, great healthcare and PTO, and lax 9-5 schedule, is to be eliminated completely. IF this is to happen, even if it's a 0.001% chance, we should totally re-wire capital markets, because the _potential reward_ is so huge.

And indeed, this idea is so strongly held (especially in silicon valley) that we see these insanely frothy valuations and billion dollar deals in what should be a down market (tremendous macro uncertainty, high interest rates, etc).

AI doomerism seemed to lack finish, though. Anyone remember Eliezer Yudkowsky? Haven't heard from him in a while.

How much do top professional athletes make? Soto, Ohtani, Mbappé, Messi…

Multi-year contracts north of $500m. Perhaps this is the direction we’re headed in.. there will be many that won’t make it to the majors.

  • If it is anything like professional sports, then the leading companies should start hiring talent as early as possible. Might as well offer $1m to any and all fresh grads and researchers, before competitors can bag them.

> But why didn’t pricing for top talent run up sooner?

Because before ChatGPT, nobody on a board of directors saw the possibility. Now, it's all they can think about.

Aren’t most of these deals locked-up stock deals, with lengthy vesting times and performance-based clauses?

The signing bonuses are probably more than enough for regular people to retire on, but these researchers and execs being poached aren’t exactly average Joes making $50k/year prior to being poached.

These "talent wars" are overblown and a result of money having nowhere else to go. People are banking on AI and robotics for human progress to take off and that's just a result of all other ventures fizzling out with this left for capital to migrate to.

If you talked to any of these folks worth billions, they aren't particularly smart, and their ideas aren't really interesting. It took us a few years to go from GPT-3 to Deepseek V3, and then another few years to go from Sonnet 4 to Kimi K2, both being open-source models on way lower funding. This hints at a deeper problem than what "hypercapitalism" suggests. In fact, it suggests that capital distribution in its current state is highly inefficient and we are simply funding the wrong people.

Smart AI talent aren't going to be out there constantly trying to get funding or the best deals. They would want to work. Capital is getting too used to not doing the ground work to seek them out. Capital needs to be more tech-savvy.

VCs and corporate development teams don't actually understand the technology deeply enough to identify who's doing the important work.

  • I wonder at what point this becomes like guaranteed salaries in sports, like prominent NBA players, where you work hard to get the salary. And then once you've made it, you are basically done, and it's hard to get up and motivate yourself. You've got acolytes and groupies distracting you, you're flush with cash without ever having really shipped anything or made any money. You're busy giving TED talks...

    At that point, are you the 18-year-old phenom who got the big payday and sort of went downhill from there?

    I imagine the biggest winners will be the ones who were doubted, not believed in, and had to fight to build real, profitable companies that become the next trillion-dollar companies.

    Not that it would be bad to be Mark Cuban, but Mark Cuban is not Jeff Bezos.

    And for posterity, I respect Mark Cuban. It's just that his exit came at a time when he was fortunate, as he got his money without having to go all the way through to the end.

  • I think one of the main issues is that the 10x or 100x talents in AI have not yet really shown their value. None of these AI companies are making any money, and they are valued over highly successful and profitable companies out there because of their "potential". ARR is nothing if you sell goods valued at 1 dollar for 90 cents.

Am I missing something here? What AI products will actually give you $100T+ investment returns, or whatever investors dream about?

    • none. especially since once AGI is created the economy as we know it ceases to exist

    • But the way we measure money is by what it can buy (the basket of goods). Surely we'll still need food and clothing even after AGI, so can still measure wealth by number of burritos (1 burrito = $12 in 2025 USD)

Must be nice to be able to ride such a wave and take your share. The money investors are throwing around these days is just insane. I remember when Webvan's $400 million investment during the dot-com bubble was considered a lot of money. These days that seems like nothing.

> It breaks down the existing rules of engagement, from the social contract of company formation, to the loyalty of labor, to the duty to sustain an already-working product, to the conflict rules that investors used to follow.

WTF is this guy hallucinating about? None of that ever existed.

"Silicon Valley built up decades of trust – a combination of social contracts and faith in the mission. But the step-up in the capital deployment is what Deleuze would call a deterritorializing force, for both companies and talent pools. It breaks down the existing rules of engagement, from the social contract of company formation, to the loyalty of labor, to the duty to sustain an already-working product, to the conflict rules that investors used to follow."

Stopped taking this thing seriously with blurbs like the above. If anyone thinks that Silicon Valley was somehow previously ruled by some magical altruism that has now been forsaken, they're in a little cloud of their own. The motives have always been more or less the same and even many of the people too, and there's no mysterious corrupting force that made any of that different then or now.

More money flowed in, technology developed more inroads into more people's lives and thus, the surface area over which the essential nature of tech business (like any business really) could be revealed more clearly expanded. This post is partly deluded.

There's so much here, and not necessarily in a good way. The way this guy talks sounds a lot like those old effective altruist arguments that went along the lines of "Well, if there's a 1% chance we can save a billion lives a thousand years in the future, that's actually better than saving 100 lives today." Ignoring the fact that "1%" wasn't an estimate you could have any confidence in.

Sure, if DeepMind could save a few percentage points on their data centres, that would be huge! Because you've taken a small number you have no basis for (a few percentage points) and multiplied it by the largest number you can find! Hey presto! Big number! But then surely the guys at Google are morons, right? Because they only bought 1 DeepMind; they should've been throwing hundreds of millions around willy-nilly! At these savings they can't afford not to!

Secondly, it might be true that it's difficult for you to compete with these companies that are hiring in teams of researchers for hundreds of millions, but what you're also doing is handing employees hundreds of millions of dollars. What are they going to do with that money other than throw it into angel investing? You're literally sowing the most fertile ground for startups in history.

I think we should actually be viewing this blow-up in compensation in the context of the hangover of ZIRP and COVID. ZIRP basically made money in Silicon Valley free: tech companies could hire anyone they wanted at almost any comp, and as long as there was growth there were no discount factors, so they could effectively make infinite-time-horizon bets. Then COVID happened, helicopter money came in to keep the economy going, and tech hired like crazy, massively bloating lots of companies. But as things returned to normal, it became obvious that hiring had just been spending, and the returns weren't there for it.

I think it's going to become clear over the long term that the same is happening here. Tech has tonnes of money, so they're going to spend it, but 3 years down the line someone is going to do the accounting, and I would bet we end up back in the same spot that we did with tech hiring in COVID: a long and painful unwind as companies have to return to reality.

I think it's unfortunate that the term "capitalism" has been captured by the left to mean the bad kind of capitalism, where regulation is only used as a moat for the established players. Capitalism as a whole is the least bad economic system for prosperity, but the least bad version of capitalism is something like the Nordic model, with good taxation and redistribution policies and consumer protections. But the term itself is poisoned, at least in U.S. politics, to the point where social democrat/liberal capitalists like Bernie call themselves socialists instead.

  • But the term itself was created and captured by the left from the beginning: Proudhon first used it, and Marx popularized it. So in the history of terminology, it has always had the meaning that we associate with it.

    I like the term "market economy" or "commercial society" more, because it captures more of what's happening in the market and in society.

  • > Capitalism as a whole is the least bad economic system for prosperity, but the least bad version of capitalism is something like the Nordic model, with good taxation and redistribution policies and consumer protections.

    Shouldn't we refer to the system by what it leads to in a majority of cases? Like Stafford Beer and the cybernetician's useful heuristic:

    POSIWID - the Purpose Of a System Is What It Does.

    I mean, the Nordic model is not predominant by any means, right? So why would we use the term capitalism to refer to that, or think capitalism generally leads to that?

    The thing we have in most places certainly seems to be dominated by monopoly players, with laws and regulation tending in most cases to protect that entrenched power and leaving the rest of the people mollycoddled and/or mistreated.

    Aside from that, I think your line of reasoning is factually backwards. The rights and protections that people won over the last few hundred years were ripped from the hands of the powerful forces of capital every time, and never given gladly. History shows clearly that these advances were won in spite of capitalism, not because of it - ironically, by the same "left" you seem to be deriding.

    More broadly, this famous "capitalism as the least bad system" argument presumes that we, by definition, can't do better in any possible future. This is taken for sophisticated wisdom nowadays, but it is arguably just the standard modern cynical excuse not to even begin to think.

    "The Dawn of Everything" by Davids Graeber and Wengrow does an amazing job of showing this notion that humans are stuck in their economic systems to be a tired modern fantasy at best, driven by our lack of imagination and political sophistication when compared to our forebears.

The bottom line is that scaling requires money, and the only way to get that in the private sector is to lure those with money with the temptation that they can multiply their wealth.

Things could have been different in a world before financial engineers bankrupted the US (the crises of Enron, Salomon Bros, and the 2008 mortgage debacle all added hundreds of billions to US debt as the govt bought the 'too big to fail' Kool-Aid and bailed out Wall Street by indenturing Main Street). Now 1/4 of our budget is simply interest payments on this debt. There is no room for govt spending on a moonshot like AI.

This environment in 1960 would have killed Kennedy's inspirational moonshot of going to the moon while it was still an idea in his head, in his post-coital bliss with Marilyn at his side.

Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.

Ironically, it seems China has a better chance now. Its release of DeepSeek with the full set of parameters is giving it a veneer of altruistic benevolence that is slightly more believable than what we see here in the West. China may win simply on thermodynamic grounds. Training and research in DL consume terawatt-hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x less energy efficient), but the 'competition' of multiple players in the US multiplies the energy requirements.

Would govt oversight have been a good thing? Imagine if General Motors, Westinghouse, Bell Labs, and Ford had competed in 1940, each with their own Manhattan Project to develop nuclear weapons. Would the proliferation of nuclear weapons have resulted in human extinction by now?

Will AI’s contribution to global warming be just as toxic as global thermonuclear war?

Isn't this just shitty capitalism fighting shitty capitalism?

If I hire a bunch of super smart AI researchers out of college for a (to them) princely sum of $1M each, then I could go to a VC and have them invest $40M for a 1% stake.

Then since these people are smart and motivated, they build something nice, and are first to market with it.

If Google wants to catch up, they could either buy the company for $4B, or hire away the people who built the thing in a year, essentially for free (since the salaries have to be paid anyway, let's give them a nice 50% bonus).

They'd be half a year behind recreating their old work, but the unicorn startup market leader would be essentially crippled.

You might ask about startup stock options, but those could easily end up being worthless, and for the researchers, would take years to turn into money.
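The deal math this comment implies can be sketched out; all the figures here are the commenter's hypotheticals, not real deal terms:

```python
# Rough sketch of the arithmetic implied above. Every number is a
# hypothetical taken from the comment, not a real valuation or salary.

def implied_valuation(investment: float, stake: float) -> float:
    """Post-money valuation implied by buying `stake` (fraction) for `investment`."""
    return investment / stake

# $40M for a 1% stake implies a $4B post-money valuation,
# which matches the $4B acquisition price mentioned later.
valuation = implied_valuation(40e6, 0.01)  # 4e9

# Google's alternative: poach the (say) 10 researchers instead of buying.
# If salaries are paid either way, the incremental cost is just the bonus;
# even counting full salary plus a 50% bonus, it is tiny next to $4B.
researchers = 10
salary = 1e6
hire_away_cost = researchers * salary * 1.5  # $15M/year, vs. $4B to acquire

print(f"Implied valuation: ${valuation / 1e9:.0f}B")
print(f"Hire-away cost:    ${hire_away_cost / 1e6:.0f}M/yr")
```

Under these (made-up) numbers, hiring the team away costs roughly 1/250th of buying the company, which is the asymmetry the comment is pointing at.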

  • VCs aren't going to give $40m to a startup for a 1% stake

    • My numbers might not be accurate but the point stands, there's 'salary' kind of money, and 'startup valuation' money.

      Hiring away key researchers costs tens to hundreds of millions of dollars (an eye-watering amount, unheard of before AI), but buying the startup costs billions.

      Then again, I'm merely a pundit when it comes to this. There's money (the cash Apple has locked in its vaults), and 'money' (Google executes a merger with a stock-swap arrangement, essentially costing them nothing, and the stock jumps 5% at the announcement, even making them money).