
Comment by general_reveal

2 days ago

Did it ever occur to you that an entire generation of developers is going to retire in less than 20 years? They are betting that the software industry will be autonomous. Really, think of our industry like the autonomous-vehicle phenomenon: we're the drivers who are about to be shown the door. That's the bet.

The world will still need software, lots of it. Their valuation is based on an entirely developer-less future (no labor costs).

Even the rise of high-level languages did not lead to a "developer-less future". What it did was improve productivity and make software cheaper by orders of magnitude; but compiler vendors did not benefit all that much from the shift.

OpenAI has all the name recognition (which is worth a couple billion in itself), but when it comes to actual business use cases in the here and now, Anthropic seems ahead - even more so if we are talking about software development. Yet they are valued at less than half of OpenAI's valuation.

What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now; they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that these will at some point coalesce into a model with a much better understanding of our world and its agency in it. If this comes to pass, OpenAI's value is near unlimited. If it doesn't, its value is at best half of what it is today.

  • > What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.

    And that's the dealbreaker for me, since they've been so adamant that scaling will take them there, while we're all seeing how it has produced diminishing returns for a while now.

    I was worried a few years back with the overwhelming buzz, but my 2017 blog post is still holding strong. To be fair, it did point to ASI, where valuation is indeed unlimited; nowadays the definition of AGI is quite weakened in comparison. But does that weakened definition still convey an unlimited valuation?

  • Obligatory reminder that today's so-called "AGI" has trouble figuring out whether I should walk or drive to the car wash in order to get my dirty car washed. It has to think through the scenario step by step, whereas any human can instantly grok the right answer.

    • The idea/hope is that a video model would answer the car wash problem correctly. These are exactly the kinds of issues you have to solve to avoid teleporting objects around in a video, so once we manage more than a couple of seconds of coherent video, we will have something that understands the real world much better than text-based models do. Then we "just" have to somehow make a combined model that has this kind of understanding and can write text and make tool calls.

      Yes, this is kind of like Tesla promising full self driving in 2016

    • I just don't know how to engage with these criticisms anymore. Do you not see how increasingly convoluted the "simple question LLMs can't answer" bar has gotten since 2022? Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?


    • What are you talking about? OpenAI's ChatGPT free tier (that everyone uses) answers this in the first sentence within a couple seconds.

      "If your goal is to get your dirty car washed… you should probably drive it to the car wash "


  • > What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.

    "AGI" is the IPO.

  • > If this comes to pass OpenAI's value is near unlimited.

    How?

    If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.

    This isn't a value proposition for a business; it's an end-of-value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected from real-world consequences.

    The only reason anyone sees value in this is that the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with the ability to create economic value.

    • "End of human-based value creation" is tantamount to post-scarcity. It "breaks" capitalism because it supposedly obviates the resource allocation problem that the free-market economy is the answer to. It's what Karl Marx actually pointed to as his utopian "fully realized communism". Most people would think of that as a pipe dream, but if you actually think it's viable, why wouldn't you want it?

It can both be true that

a) AI is going to replace a Bazillion-Dollar Industry and that

b) being an AI model provider does not allow anyone to capture margins above 5% long-term.

I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean that they capture monopoly rents on their assets.

Around 6 months ago, the company I work for bought Cursor subscriptions for everyone. I thought to myself, "this is it".

The majority of my coworkers now push AI-generated code each day, and it has completely absolved me of any fear whatsoever that AI will take my job.

But Anthropic is the one that is disrupting software development? So why are we not piling into that instead?