Comment by adamtaylor_13

6 hours ago

Sounds like a death knell to me.

If I recall correctly, Ed Zitron noted in a recent article that one of the horsemen of his AI-pocalypse would be price hikes from providers.

Every time an Ed Zitron article is posted on HN, it is met with a torrent of vitriol and personal attacks. The articles are okay if not overly wordy but I don’t see how the subject matter elicits that strong of a response.

At any rate, this observation is not unique to Ed; lots of people have reached the same conclusion that the math doesn’t add up from a business-profitability perspective.

  • > The articles are okay if not overly wordy

    Did you mean instead "The articles are okay if overly wordy"?

  • Yeah I've noticed this phenomenon as well. Frankly, I expected to be downvoted into oblivion just for mentioning him. But Zitron's commentary on the financial implications of AI usually reads as common sense to me and checks out (granted I'm not really capable of refuting his points.)

  • > The articles are okay if not overly wordy but I don’t see how the subject matter elicits that strong of a response.

    Hot take, but really it's more of an observation than a take: We saw this exact response in Blockchain & crypto circles a few years ago. (Though HN wasn't quite as culturally "central" to those)

    Economic bubbles are subject to the Tinkerbell Effect: they exist so long as people believe in them, and collapse when either 1) they become so financially unsustainable as to collapse, having consumed all the money the economy could possibly give them, or 2) people stop believing in the bubble and stop feeding it money.

    In this regard, the statement "NFTs are stupid" was not merely ridiculing those who bought them, but a direct attack on the bubble and those invested in it. And this is something the people involved in the bubble understand instinctively, even if they aren't consciously aware of it. (There's a psychological mechanism behind that, but it's not relevant here.)

    So consequently, they react aggressively to dissent. They seek to enforce their narrative, because not doing so is a threat to the bubble and their financial interests.

    ---

    AI's not much different. It's clearly a bubble to everyone, including the AI execs who say so out loud.

    And people react aggressively to dissent like Ed's, because if the wider public stops believing in AI's future, the bubble bursts. They'll stop tolerating datacenter construction, they'll sell their Nvidia shares, they'll demand regulators restrict AI.

    (And to those who can feel their aggression rising reading this comment. Hi, yes. I see you. If I were wrong, nothing I said would matter. You'd be wasting your time engaging with it, history would simply prove me wrong. But by all means, type up that reply or click that button.)

  • > Every time an Ed Zitron article is posted on HN, it is met with a torrent of vitriol and personal attacks.

    It's why I started to pay attention to what he says.

    Dude is a bit verbose, but his rationale is solid. If it gets the panties of some people here in a bunch, he may be on to something.

  • I agree with Ed Zitron more often than not, but I do think he Flanderized himself into being the aggressively-anti-AI guy, to the point where he now regularly makes incorrect claims about the capabilities of "AI". I see people on HN doing the same thing: making claims about capabilities that were true as recently as six months ago but aren't true anymore.

    [I'm an AI-doomer myself, but I am an AI-doomer because by and large this stuff increasingly works, not because it doesn't.]

    That said, Ed Zitron still does a lot of useful research into the economics of the industry and I also believe that continued progress in AI can disrupt the world (for better and for worse) while the economics propping up all the frontier model providers can also implode spectacularly.

    Some people talk about how AI doom comes about either way, because it could take all of our jobs OR crash the economy when the current bubble bursts. But as an uber-AI-doomer, I happen to think there is a very real possibility of a double downside (for the labor class, at least), where both of those things happen at the same time!

That guy has his own form of AI psychosis

  • I'd say he's allowed his "mostly correct" opinion on the financial situation to color his "mostly incorrect" opinion on actual AI usefulness.

    I wouldn't call it psychosis, though. He's committing a common fallacy: assuming that expertise in one area confers expertise in another.

Literally every VC funded consumer product has switched from a "growth at all costs" phase to a "Now we hike prices, make money, and generally enshittify" phase, and tons of those companies are still around (e.g. Uber), so I'm not sure why anyone thinks it would be much different for AI.

  • Those companies at least had somewhat of a moat.

    As I see it, the only thing close to a moat is CC for Anthropic, and since it is a big ol' fucking mess that is a) apparently now beyond the ability of any current SOTA LLM to fix, and b) understood by absolutely no human, I'd say it's not much of a moat. The other agents will catch up sooner rather than later.

    The other providers? I don't see a moat. We jump ship at the drop of a hat.

  • Yes, but how many succeed without any kind of moat, or without having destroyed the incumbent companies?

    I'm still running local LLMs and finding perfectly acceptable code gen.