Comment by fc417fc802

12 hours ago

That's certainly how it looks right now, but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI, but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically, we're a bunch of tribesmen speculating about the future potential outcomes of the space race (i.e., the impacts, limits, and timeline of ASI).

Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.

I guess you can sell it to the Department of War.

  • > What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

    It's awesome and world-dominating. You just don't sell access to that AI; instead you directly, by yourself, dominate any field where better AI provides a competitive advantage, as soon as you can afford the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.

    Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around to start with when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.

    • To dominate the real world, you need correcting feedback loops from reality. These feedback loops and regulations (in medical and other industries) take a long time to come back with good signals. So you are still time-bound by how fast your experiments run.

    • It's not clear to me that one horse-sized AI allows you to outcompete 100 duck-sized AIs in use by everyone else once you factor in the non-intelligence contributions that the others with weaker AIs bring to the table.

      There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.

      Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack the intangible elements of success, such as empathizing with their customers' needs.

      If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.

    • Yup. That doesn't really take a full-blown AGI on the path to ASI on the path to godhood - it'll take a bit better and more reliable LLM with a decent harness.

      That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.

      (One possible trigger would be the open models. As long as the gap between SOTA and open is constant or decreasing, there will be a point where SOTA operators might be forced to cannibalize the software industry by a third party with an open model and access to infra pulling the trigger first.)


  • > Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

    At this point, if you can no longer safely drip-feed industry the access to "thinking as a service" and rake in rent, you start using it, displacing existing players in segment after segment until you kill the entire software industry.

    That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.

  • If you assume the status quo, a powerful but not-quite-human-level AI, then you are most likely correct. However, one of the primary winner-takes-all hypotheticals (and to be sure, it remains nothing more than a wild hypothetical at this point) is achieving and managing to control proprietary ASI. Roughly, constructing something that vaguely resembles a god.

    Since it would be unfathomably smarter than the people making use of it, you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you could quite literally take over the world at that point.

    Not that I think it's likely such a system will come to pass so easily, nor that humanity could maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here, so "within the realm of plausibility" is sufficient.