Comment by pdonis

2 years ago

> Bad acting humans with AI systems are the threat, not the AI systems themselves.

I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.

Right now, the "bad acting human" is, for example, Sam Altman, who frequently cries "Wolf!" about AI. He is trying to eliminate the competition, manipulate public opinion, and present himself as a good Samaritan. He is so successful in his endeavor, even without AI, that you must report to the US government about how you created and tested your model.

  • The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.

    This is exactly the scenario that is taking shape.

    A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.

    • Open source models do exist and will continue to do so.

      The biggest advantage ML gives is in lowering costs, which can then be used to lower prices and drive competitors out of business. Consumers get lower prices, though, which is ultimately better and more efficient.

    • > This is exactly the scenario that is taking shape.

      That's a pre-super-intelligent AI scenario.

      The super-intelligent AI scenario is when the AI becomes a player of its own, able to compete with all of us over how things are run, using its general intelligence as a force multiplier to... do whatever the fuck it wants, which is a problem for us, because there's approximately zero overlap between the set of things a super-intelligent AI may want, and us surviving and thriving.

  • > He is trying to eliminate the competition,

    Funny way of doing it, going around saying "you should regulate us, but don't regulate people smaller than us, and don't regulate open-source".

    > you must report to the US government about how you created and tested your model.

    If you're referring to the recent executive order: only when dual-use, meaning the following:

    ---

    (k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

    (i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

    (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

    (iii) permitting the evasion of human control or oversight through means of deception or obfuscation.

    - https://www.whitehouse.gov/briefing-room/presidential-action...

  • The "bad acting human" are the assholes who uses "AI" to create fake imagery to push certain (and likely false) narratives on the various medias.

    Key thing here is that this is fundamentally no different from what has been happening since time immemorial; it's just that it becomes easier with "AI" as part of the tooling.

    Every piece of bullshit starts from the "bad acting human". Every single one. "AI" is just another new part of the same old process.

    • So let me tie this to a controversial topic: gun control.

      If people agree that gun control would reduce the harm from guns, wouldn't this same logic apply to AI? Is it different?

This is true, but it skirts around a bit of the black box problem. It's hard to put guardrails on an amoral tool whose opacity makes it hard to fully understand its failure modes. And it doesn't even require "bad acting humans" to do damage; it can just be well-intentioned-but-naïve humans.

  • It's true that the more complex and capable the tool is, the harder it is to understand what it empowers the humans using it to do. I only wanted to emphasize that it's the humans that are the vital link, so to speak.

    • You're not wrong, but I think this quote partly misses the point:

      >The problem to be solved here is not how to control AI

      When we talk about mitigations, it is explicitly about how to control AI, sometimes irrespective of how someone uses it.

      Think about it this way: suppose I develop some stock-trading AI that has the ability to (inadvertently or purposefully) crash the stock market. Is the better control to put limits on the software itself so that it cannot crash the market or to put regulations in place to penalize people who use the software to crash the market? There is a hierarchy of controls when we talk about risk, and engineering controls (limiting the software) are always above administrative controls (limiting the humans using the software).

      (I realize it's not an either/or and both controls can - and probably should - be in place, but I described it as a dichotomy to illustrate the point)
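
      To make this concrete, here is a minimal sketch of what an engineering control might look like for that hypothetical stock-trading AI. Everything in it (the names, the limits, the broker API) is invented for illustration; the point is only that the limits live in the software itself rather than in a rulebook aimed at its users:

        # An "engineering control": the trading software enforces hard limits
        # itself, regardless of what any human or model asks of it.
        # All names here are hypothetical.

        class OrderLimitExceeded(Exception):
            pass

        class StubBroker:  # stand-in for a real brokerage API
            def place_order(self, symbol, shares, price):
                return f"filled {shares} {symbol} @ {price}"

        class GuardedTrader:
            MAX_ORDER_SHARES = 10_000         # hard per-order cap
            MAX_DAILY_NOTIONAL = 5_000_000.0  # hard cap on dollars traded per day

            def __init__(self, broker):
                self.broker = broker
                self.notional_today = 0.0

            def submit(self, symbol, shares, price):
                notional = shares * price
                if shares > self.MAX_ORDER_SHARES:
                    raise OrderLimitExceeded("per-order share cap exceeded")
                if self.notional_today + notional > self.MAX_DAILY_NOTIONAL:
                    raise OrderLimitExceeded("daily notional cap exceeded")
                self.notional_today += notional
                return self.broker.place_order(symbol, shares, price)

        trader = GuardedTrader(StubBroker())
        trader.submit("ACME", 100, 50.0)       # fine
        # trader.submit("ACME", 50_000, 50.0)  # raises OrderLimitExceeded

      An administrative control, by contrast, would leave submit() unrestricted and rely on penalizing whoever abuses it after the fact.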

Sure, today at least. But there is a future where the human has given AI control of things, with good intentions, and the AI has become the threat.

AI is a tool today; tomorrow, AI will be calling the shots in many domains. It's worth planning for tomorrow.

  • A good analogy might be a shareholder corporation: each one began as a tool of human agency, and yet a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.

    The more AI/ML is woven into our infrastructure and economy, the less it will be possible to find an "off switch", any more than we can (realistically) find an off switch for Walmart, Amazon, etc.

    • > a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.

      No, the corporation has an agency that is a tool of particular humans who are using it. Those humans could be shareholders, employees, or board members; but in any case they will have some claim to be acting for the corporation. But it's still human actions. Corporations can't do anything unless humans acting for them do it.

  • It's still humans who make the decision to let “AI” call the shots.

    • Sure, but that's the gist of AI X-risk: this is one of those few truly irreversible decisions. We have one shot at it, and if we get it wrong, it's game over.

      Note that it may not be immediately apparent that we got it wrong. Think of a turkey on a stereotypical small American farm. It sees itself living a happy and safe life under the protection of its loving Human, until one day, for some reason completely incomprehensible to the turkey, the loving Human comes and chops its head off.

  • > there is a future where the human has given AI control of things, with good intentions, and the AI has become the threat

    As in, for example, self-driving cars being given more autonomy than their reliability justifies? The answer to that is simple: don't do that. (I'm also not sure all such things are being done "with good intentions".)

    • > The answer to that is simple: don't do that.

      This is also the answer to over-eating, and to the dangers of sticking your hands in heavy machinery while it's running.

      And yet there's an obesity problem in many nations, and health-and-safety rules are written in blood.

      What you say up-thread is, in itself, correct:

      > I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.

      Trouble is, we don't know how to minimise the damage that bad acting humans can do with a tool that can do the thinking for them. Or even if we can. And that's assuming nobody is dumb enough to put the tool into a loop, give it some money, and leave it unsupervised.

    • Firstly, "don't do that" probably requires some "control" over AI in terms of how it's used and rolled out. Secondly, I find it hard to believe that rolling out self-driving cars was a play by bad actors; there was a perceived improvement to the driving experience in exchange for money, which feels pretty straightforward to me. I'm not in disagreement that it was premature, though.

  • I'd rather address our reality than plan for someone's preferred sci-fi story. We're utterly ignorant of tomorrow's tech. Let's solve what we know is happening before we go tilting at windmills.

  • WHY on earth would we let "AI systems" we don't understand control powerful things we care about? We should criticize the human, politician, or organization that enabled that.

    • Why? Because the man-made horrors beyond mortal comprehension seem to bring in the money, so far. Because the society we're in is used to mere compensation and prison time being suitable remedies for poor decisions that lead to automation exploding in people's faces (literally or metaphorically), not for things that can eat everyone.

      And then there are the cases of hubris where people only imagine they understand the powerful thing, but they don't, like Chernobyl exploding, and basically every time someone is hacked or defrauded.

A big problem with discourse on AI is people talking past each other because they're not being clear enough on their definitions.

An AI doomer isn't talking about any current system, but hypothetical future ones which can do planning and have autonomous feedback loops. These are best thought of as agents rather than tools.
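
To make that distinction concrete, here is a minimal sketch of the difference between a tool and an agent with an autonomous feedback loop. Every name in it is hypothetical; the stubs exist only so the shape of the loop is visible:

    # Minimal sketch of the tool/agent distinction. All scaffolding here
    # is hypothetical, standing in for a real model and a real environment.

    class Model:
        def generate(self, prompt: str) -> str:
            return "buy 1 share"  # stand-in for a real model's output

    class Environment:
        def __init__(self):
            self.balance = 100.0
        def observe(self) -> str:
            return f"balance={self.balance}"
        def execute(self, action: str) -> str:
            self.balance += 1.0   # stand-in for a real side effect
            return self.observe()

    model, env = Model(), Environment()

    # Tool: one-way flow of intent. A human reads the answer and decides.
    answer = model.generate("Should I rebalance my portfolio?")

    # Agent: the model's output is executed directly and the result is fed
    # back in, with no human between steps -- an autonomous feedback loop.
    state = env.observe()
    for _ in range(10):
        action = model.generate(f"State: {state}. Next action?")
        state = env.execute(action)

The doomer concern is about the second pattern: once the loop closes, the system's behavior is driven by whatever the model emits, not by a per-use human decision.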

  • But how does this agent interact with the outside world? It's just a piece of silicon buzzing with electricity until it outputs a message that some OTHER system reads and interprets.

    Maybe that's a set of servos and robotic legs, or maybe it's a Bloomberg terminal and a bank account. You'll notice that all of these things are already regulated if they have enough power to cause damage. So at the end the GP is completely right; someone has to hook up the servos to the first LLM-based terminator.

    This whole thing is a huge non-issue. We already (strive to) regulate everything that can cause harm directly. This regulation reaches these fanciful autonomous AI agents as well. If someone bent upon destroying the world had enough resources to build an AI basilisk or whatever, they could have spent 1/10 the effort and just created a thermonuclear bomb.

    • How does Hitler or Putin or Musk take control? How does a project director build a dam?

      Via people, sending messages to them, convincing them to do things. This can be with facts and logic or with rhetoric and emotional appeals or orders that seem to come from entities of importance or transfers of goods/services (money).

If people understood this, they would have to live with the unsatisfying reality that not all violators can be punished. Painting the technology itself as potentially criminal instead lets them get revenge on corporations, which is what the mostly-artist types want.

If you apply this thinking to nuclear weapons it becomes nonsensical, which tells us that a tool that can only be oriented toward harm will only be used to do harm. The question then is whether LLMs, or AI more broadly, will even potentially help the general public, and there is no reason to think so. The goal of these tools is to be able to continue running the economy while employing far fewer people. These tools are oriented by their very nature to replace human labor, which in the context of our economic system has a direct and unbreakable relationship to a reduction in the well being of the humans it replaces.

  • > a tool that can only be oriented to do harm

    Nuclear technology can be used for non-harmful things. Even nuclear bombs can be used for non-harmful things--see, for example, the Orion project.

    > These tools are oriented by their very nature to replace human labor

    So is a plow. So is a factory. So is a car. So is a computer. ("Computer" used to be a description of a job done by humans.) The whole point of technology is to reduce the amount of human drudge work that is required to create wealth.

    > in the context of our economic system has a direct and unbreakable relationship to a reduction in the well being of the humans it replaces

    All of the technologies I listed above increased the well being of humans, including those they replaced. If we're anxious that that might not happen under "our economic system", we need to look at what has changed from then to now.

    In a free market, the natural response to the emergence of a technology that reduces the need for human labor in a particular area is for humans to shift to other occupations. That is what happened in response to the emergence of all of the technologies I listed above.

    If that does not happen, it is because the market is not free, and the most likely reason for that is government regulation, and the most likely reason for the government regulation is regulatory capture, i.e., some rich people bought regulations that favored them from the government, in order to protect themselves from free market competition.

  • 1. You've fallen for the lump of labor fallacy. A 100x productivity boost ≠ 100x fewer jobs, any more than a 100x boost = static jobs with 100x more projects. Reality is far more complicated, and viewing labor as some static lump, a zero-sum game, will lead you astray.

    2. Your outlook on the societal impact of technology is contradicted by reality. The historical result of better tech always meant increased jobs and well-being. Today is the best time in human history to be alive by virtually every metric.

    3. AI has been such a massive boon to humanity and your everyday existence for years that questioning its public utility is frankly bewildering.

    • 1. This gets trotted out constantly but this is not some known constant about how capitalist economies work. Just because we have more jobs now than we did pre-digital revolution does not mean all technologies have that effect on the jobs market (or even that the digital revolution had that effect). A tool that is aimed to entirely replace humans across many/most/all industries is quite different than previous technological advancements.

      2. This is outdated; life is NOT better now than at any other time. Life expectancy is going down in the US, there is vastly more economic inequality now than there was in the 60s, and people broadly report much worse job satisfaction than previous generations did. The only metric you can really point to for now being better than the 90s is absolute poverty going down. Which is great, but those advancements are actually quite shallow on a per-person basis and are matched by declines in relative wealth for the middle 80% of people.

      3. ??? What kind of AI are you talking about? LLMs have only been interesting to the public for about a year now

  • Nuclear weapons are a tool to keep the peace via MAD (mutual assured destruction).

    It's most likely the main reason there have been no direct world wars between superpowers.

But usually there’s a one-way flow of intent from the human to the tool. With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.

You can already see this on today's internet. I'm sure the pizzagate people genuinely believed they were doing a good thing.

This isn’t the same as an amoral tool like a knife, where a human decides between cutting vegetables or stabbing people.

  • > With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.

    The answer to this is simple: don't use a tool you don't understand. You can't fix this problem by nerfing the tool. You have to fix it by holding humans responsible for how they use tools, so they have an incentive to use them properly, and to not use them if they can't meet that requirement.