Comment by kjkjadksj

2 years ago

Ilya's issue isn't developing a Safe AI. It's developing a Safe Business. You can make a safe AI today, but what happens when the next person is managing things? Are they so kindhearted, or are they cold and calculated like the management of many harmful industries today? If you solve the issue of Safe Business and eliminate the incentive structures that lead to 'unsafe' business, you basically obviate a lot of the societal harm that exists today. Short of solving this issue, I don't think you can ever confidently say you will create a safe AI, and that also makes me not trust your claims, because they must be born of either ignorance or malice.

> You can make a safe AI today, but what happens when the next person is managing things?

The point of safe superintelligence, and presumably the goal of SSI Inc., is that there won't be a next (biological) person managing things afterwards. At least none who could do anything to build a competing unsafe SAI. We're not talking about the banal definition of "safety" here. If the first superintelligence has any reasonable goal system, its first plan of action is almost inevitably going to be to start self-improving fast enough to attain a decisive head start against any potential competitors.

  • I wonder how many people panicking about these things have ever visited a data centre.

    They have big red buttons at the end of every pod. Shuts everything down.

    They have bigger red buttons at the end of every power unit. Shuts everything down.

    And down at the city, there’s a big red button at the biggest power unit. Shuts everything down.

    Having arms and legs is going to be a significant benefit for some time yet. I am not in the least concerned about becoming a paperclip.

    • Trouble is, in practice what you would need to do might be “turn off all of Google’s datacenters”. Or perhaps the thing manages to secure compute in multiple clouds (which is what I’d do if I woke up as an entity running on a single DC with a big red power button on it).

      The blast radius of such decisions is large enough that this option is not as trivial as you suggest.

    • > Having arms and legs is going to be a significant benefit for some time yet

      I am also of this opinion.

      However, I also think that the magic shutdown button needs to be protected against terrorists and ne'er-do-wells, so it is consequently guarded by arms and legs that belong to a power structure.

      If the shutdown-worthy activity of the evil AI can serve the interests of the power structure preferentially, those arms and legs will also be motivated to prevent the rest of us from intervening.

      So I don't worry about AI at all. I do worry about humans, and if AI is an amplifier or enabler of human nature, then there is valid worry, I think.

    • Where can I find the red button that shuts down all Microsoft datacenters, all Amazon datacenters, all Yandex datacenters, and all Baidu datacenters at the same time? Oh, there isn't one? Sorry, your superintelligence is in another castle.

    • I doubt a manual alarm switch will do much good when computers operate at the speed of light. It's an anthropomorphism.

    • It's been more than a decade now since we first saw botnets based on stealing AWS credentials and running arbitrary code on them (e.g. for crypto mining) - once an actual AI starts duplicating itself in this manner, where's the big red button that turns off every single cloud instance in the world?

    • This is why I think it’s more important that we give AI agents the ability to use human surrogates. Arms and legs win, but they can be controlled with the right incentives.

    • If it’s any sort of smart AI, you’d need to shut down the entire world at the same time.

  • > there won't be a next (biological) person managing things afterwards. At least none who could do anything to build a competing unsafe SAI

    This pitch has Biblical/Evangelical resonance, in case anyone wants to try that fundraising route [1]. ("I'm just running things until the Good Guy takes over" is almost a monarchic trope.)

    [1] https://biblehub.com/1_corinthians/15-24.htm

The safe business won’t hold very long if someone can gain a short-term business advantage with unsafe AI. Eventually, government has to step in with a legal and enforcement framework to prevent greed from ruining things.

  • It's possible that safety will eventually become the business advantage, just like privacy can be a business advantage today but wasn't taken so seriously 10-15 years ago by the general public.

    This is not even that far-fetched. A safe AI that you can trust should be far more useful and economically valuable than an unsafe AI that you cannot trust. AI systems today aren't powerful enough for the difference to really matter yet, because present AI systems are mostly not yet acting as fully autonomous agents having a tangible impact on the world around them.

  • Government is controlled by the highest bidder. I think we should be prepared to do this ourselves by refusing to accept money made by unsafe businesses, even if it means saying goodbye to the convenience of fungible money.

    • > Government is controlled by the highest bidder.

      While this might be true of the governments you have personally experienced, it is far from a universal truth.

    • "Government doesn't work. We just need to make a new government that is much more effective and far reaching in controlling people's behavior."

    • Replace “government” with a collective societal assurance that no one cheats, so we aren’t all doomed. Otherwise, someone will do it, and we will all have to bear the consequences.

      If even just enough individuals are willing to buy these services, then again we will all bear the consequences. There is no way out of this in which libertarian ideals lead to a safe result. What makes this an even more wicked problem is that decisions made in other countries will affect us all as well; we can’t isolate ourselves from AI policies made in China, for example.

  • which government?

    Will China obey US regulations? Will Russia?

    • No, which makes this an even harder problem. Can US companies bound by one set of rules compete against Chinese ones bound by another set of rules? No, probably not. Humanity will have to come together on this, or someone will develop killer AI that kills us all.

I'd love to see more individual researchers openly exploring AI safety from a scientific and humanitarian perspective, rather than just the technical or commercial angles.

> Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

This tells me enough about why sama was fired, and why Ilya left.

Is safe AI really such a genie-out-of-the-bottle problem? From a non-expert point of view, a lot of the hype just seems to be people/groups trying to stake their claim on what will likely be a very large market.

  • A human-level AI can do anything that a human can do (modulo whether you put it into a robot body, but lots of different groups are already doing that with current LLMs).

    Therefore, please imagine the most amoral, power-hungry, successful sociopath you've ever heard of. Doesn't matter if you're thinking of a famous dictator, or a religious leader, or someone who never got in the news and you had the misfortune to meet in real life — in any case, that person is/was still a human, and a human-level AI can definitely also do all those things unless we find a way to make it not want to.

    We don't know how to make an AI that definitely isn't that.

    We also don't know how to make an AI that definitely won't help someone like that.

    • > We also don't know how to make an AI that definitely won't help someone like that.

      "...offices in Palo Alto and Tel Aviv, where we have deep roots..."

      Hopefully, SSI holds its own.

    • Anything except tasks that require having direct control of a physical body. Until fully functional androids are developed, there is a lot a human-level AI can't do.

Did you read the article? What I gathered from it is that this is precisely what Ilya is attempting to do.

Also we absolutely DO NOT know how to make a safe AI. This should be obvious from all the guides about how to remove the safeguards from ChatGPT.

  • Fortunately, so far we don't seem to know how to make an AI at all. Unfortunately we also don't know how to define "safe" either.

Imagine the hubris and arrogance of trying to control a “superintelligence” when you can’t even control human intelligence.

  • No more so than trying to control a supersonic aircraft when we can't even control pigeons.

    • I know nothing about physics. If I came across some magic algorithm that occasionally poops out a plane that works 90 percent of the time, would you book a flight in it?

      Sure, we can improve our understanding of how NNs work but that isn't enough. How are humans supposed to fully understand and control something that is smarter than themselves by definition? I think it's inevitable that at some point that smart thing will behave in ways humans don't expect.

    • Correct, pigeons are much more complicated and unpredictable than supersonic aircraft, and the way they fly is much more complex.

Yeah, this feels close to the issue. It seems more likely that a harmful superintelligence emerges from an organisation that wants it to behave that way than from it inventing and hiding motivations until it has escaped.

  • I think a harmful AI simply emerges from asking an AI to optimize for some set of seemingly reasonable business goals, only to find it does great harm in the process. Most companies would then enable such behavior by hiding the damage from the press to protect investors rather than temporarily suspending business and admitting the issue.

    • Not only will they hide it, they will own it when exposed, and lobby to ensure it remains legal to exploit for profit. See oil industry.

    • Forget AI. We can’t even come up with a framework that keeps people’s seemingly reasonable goals from doing great harm in the process. We often don’t have enough information until we try and find out that, oops, using a mix of rust and powdered aluminum to try to protect something from extreme heat was a terrible idea.
