Comment by mmaunder

9 hours ago

I’m fascinated by how out of touch that NYT article is. It’s as if it were written by someone who just spent 3 years on the moon. “The next big thing will be agents: The models will fill digital shopping baskets and take care of online bills. They will act for you.”

Golly Mr two times Pulitzer nominee, do tell us more!

There is a kind of liberal arts elite that seems not to be using AI much and not to be buying any of these services. Contrast that with those of us in tech who are handing over money as fast as we can and can’t get enough of gpt 5.2 codex on xhigh and similar products that are game-changing enablers.

Makes me wonder if we’re seeing a fracture beginning to form in society, where the doomsayers, naysayers, cynics and skeptics will realize their error too late.

My view on AI is that this is the world’s first Unbubble: the majority view is that we are overinvested, but history will show we actually underestimated future revenue and profitability.

The conditions for the Unbubble are perfect. We have a once-in-a-species innovation inside an economic system where all value accrues to the creators and financiers. And we have just emerged from the housing bubble and the dot com bubble in the last 3 decades, freshly scorched.

We thought connecting everyone would create new value far faster than it did. But really it took a long time to run all that fiber and make it fast, and it was just laying the plumbing for this moment.

Training big foundational models may seem slow, but it’s happening way faster than circling continents and the globe with fiber and developing terabit switching fabric.

I spend 12 hours a day using codex CLI to write extremely fast Rust and CUDA code with advanced math that does things I didn’t think were possible. My focus is on creating value from the second- and third-order effects of AI. These enabling effects show up in few conversations today. As weirdly innovative products emerge from small shops, they will begin to be discussed.

Now you can pay them twice: once to access the tool to do your job and once to access the market for customers.

Definitely no reason to worry about an entrenched oligopoly there.

I have no idea why so many people think that an argument that AI works is the same thing as an argument that AI will be profitable.

As AI gets better, as training techniques improve, and as algorithms improve, fewer processors will be needed to run anything useful. All of the advances will end up in the public domain, if not before they are even implemented then soon after. There will not be many economies of scale between having 100M customers and 10K customers, so there's no way to keep out competitors. They will all compete on price. If the models get really, really good, a "good enough" model will end up running on your old laptop and you won't have to pay for anything.

Saying that AI will be productive - which is yet to be seen; I don't know how much polishing or complete rethinking your code will have to go through before it can ship as an actual product that you have to stand behind and support - is not the same as saying that AI will be profitable.

We actually don't even need that many computer programs. Hypothetically, a ton of excess LLM coding supply might allow us to take out a few layers of expensive abstraction from our current stacks, and make more code even less necessary. They kept telling us that all of that abstraction was a result of trying to save developer labor costs, right? If AI is productive and rentiers can't manage to extract that productivity due to competition, that equation changes.

In the end, we say that the dot com bubble resulted in a huge amount of productive capacity that we were later able to put to use. But that doesn't mean that putting a quarter of a billion 90s dollars into DrKoop.com was a good idea; nope, still dumb.

  • > I have no idea why so many people think that an argument that AI works is the same thing as an argument that AI will be profitable.

    The fact that it works well for expensive categories of output (like software engineering, legal strategy, etc.) makes it difficult to imagine that it won't be profitable. You could still argue that the investments being made today are disproportionate, or that intense competition will stifle margins, but it's creating enough value to capture plenty of money.

  • Funny, I keep seeing that AI is not profitable simultaneously because it is too expensive to run and because it is too cheap to run.