Comment by jwpapi

14 hours ago

I don’t see any other outcome anymore, to be honest, after seeing how humans use AI, how AI works, and how providers tune their models.

To me it’s a given:

- AI in its current state is ruthless in achieving its goal

- Providers tune ruthlessness to get stronger AIs than the competition

- Humans can’t evaluate all consequences of the seeds they’ve planted.

Collateral and reckless damage is guaranteed at this point.

Combined with now giving some AIs the ability to kill humans, this is gonna be interesting..

We could stop it, but we won’t.

>AI in its current state is ruthless in achieving its goal

I don't believe this to be a trait of any AI model; the model just does the right thing or the wrong thing.

The ruthless maximising of a particular trait is something that happens during training.

It does not follow that a model trained to reason will necessarily exhibit this ruthless goal-seeking behaviour itself.

  • No lineage of AI models will be created that cannot achieve goals; they will be outcompeted by models that can.

>We could stop it

I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can, practically you cannot, as there are other things required to pull that off, not just two working hands.

I doubt "stopping it" is up to anyone; it's rather a phenomenon, and it's quite clear we're all going to wing it. It's a literal fight for power. Nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it instead, just to guarantee its own power.

It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.

  • Yes, it would be like trying to “stop” gunpowder in 1400 or atomic weapons in 1938. Pandora’s box is open.

    • Gunpowder (weapons) and atomic tech (energy, materials, weapons) are heavily regulated across most of the planet, as the risks of giving every company or person free access to them for their own selfish purposes, without strong guardrails, clearly outweigh the benefits.

      The fact that something exists doesn't mean that having it readily available is the only option, particularly if it has potentially disastrous consequences at scale. We are choosing to make it available to everyone, fully unregulated, and that choice will prove either beneficial or detrimental to society at some point.

      I don't think it is inevitable, I think it is a conscious choice made by a few that have their own and only their own interests in mind.

      As a technologist, I am amazed at this tech and see some personal benefits. As a human, I am terrified of the potential net negative effects, and I am having trouble reconciling those two feelings.


Why does it have to be doom and gloom? Serious question. When we plant seeds they bear fruit, and not all fruit is poison.

  • It's doom and gloom because the underlying game theory forces all state actors into an unbound and irresponsible arms race, consequences be damned.

    AI development game theory is extremely similar to the game theory behind nuclear arms development, but worse (nuclear weaponry was born from Human General Intelligence, and is therefore a subset of the potential of AI development). Failing to be the most capable actor could put one in a position of permanent loss of autonomy/agency at the whims of more capable actors.
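    The arms-race dynamic described above is essentially a prisoner's dilemma. A minimal sketch (the payoff numbers are purely illustrative; only their ordering matters):

```python
# Toy two-actor arms-race game. Payoff tuples are (row player, column player);
# the values are made up for illustration and only their ordering matters.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),      # mutual restraint: best joint outcome
    ("restrain", "accelerate"): (0, 4),    # restrainer risks permanent loss of agency
    ("accelerate", "restrain"): (4, 0),
    ("accelerate", "accelerate"): (1, 1),  # unbounded arms race: worse for both
}

def best_response(opponent_move):
    """Row player's best reply against a fixed opponent move."""
    return max(("restrain", "accelerate"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

# "accelerate" is the best reply no matter what the other side does, so the
# only stable outcome is (accelerate, accelerate), even though mutual
# restraint would leave both actors better off.
print(best_response("restrain"), best_response("accelerate"))
# → accelerate accelerate
```

    Under this structure no actor can unilaterally restrain without risking the worst individual outcome, which is the sense in which "nobody stops anything of this nature."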

  • Not OP, but AI is fundamentally in another category than any other technology before it. It requires moral fortitude to wield in a way that guns and books didn't require. It augments human judgement in a way that needs a moral framework to clearly guide it.

    Unfortunately, as a species we seem to be abandoning morality as a general principle. Everything is guided by cold hard rationality rather than something greater than us.

  • Because it's a fruit governed by humans, in the scope of a capitalistic and patriarchal society. And all fruits planted in a capitalistic and patriarchal society are poison.

  • The current fruit is automating away a ton of human labor with no foreseeable way to re-engage that labor. It is poison for the majority of humanity and will bear fruit only for the limited few who can use or own it.

    I think that much is fairly clear from AI.

> Collateral and reckless damage is guaranteed at this point.

It's industrialization and mechanized warfare all over again.

AI isn't ruthless; that doesn't even make sense. It's a mathematical model. If it's optimizing for the wrong thing, that's strictly the fault of the people who chose what to optimize for.

  • You need to go back and research AI safety from long before LLMs were a thing. Any complex goal-driven system will have outcomes that cannot be predicted. Saying "it's a mathematical model" betrays ignorance of behavior in complex systems: very tiny changes in initial conditions can produce vastly different outcomes, and you don't have enough entropy in the visible universe to test them all.
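    The sensitive-dependence point shows up even in a one-line dynamical system. A sketch using the logistic map, a standard toy chaotic system (nothing AI-specific; the starting point and step count are arbitrary):

```python
# Iterate the logistic map x -> r*x*(1-x) with r = 4.0 (a chaotic regime)
# from two starting points that differ by one part in a billion.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.4)
b = trajectory(0.4 + 1e-9)

# The gap roughly doubles each step, so within a few dozen iterations the
# two runs bear no resemblance to each other despite near-identical starts.
print(abs(a[1] - b[1]))                       # still tiny after one step
print(max(abs(x - y) for x, y in zip(a, b)))  # grows to order one
```

    If a fifty-step quadratic map is already unpredictable in practice, a trillion-parameter model interacting with the world certainly is.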

  • there might be better words to describe that it doesn't really have the same boundaries we assume it has.

I love how sci-fi warned us against hyper-competent galaxy brain conscious AI but we are actually going to be killed by confidently wrong stochastic parrots.