Comment by skadamat

2 years ago

It's unfortunate that "AI" is still framed and discussed as some type of highly autonomous system that's separate from us.

Bad acting humans with AI systems are the threat, not the AI systems themselves. The discussion is still SO focused on the AI systems, not on the actors and on how we as societies align on which AI uses are okay and which ones aren't.

> Bad acting humans with AI systems are the threat, not the AI systems themselves.

I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.

  • Right now, the "bad acting human" is, for example, Sam Altman, who frequently cries "Wolf!" about AI. He is trying to eliminate the competition, manipulate public opinion, and present himself as a good Samaritan. He has been so successful in this endeavor, even without AI, that you now must report to the US government on how you created and tested your model.

    • The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.

      This is exactly the scenario that is taking shape.

      A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.


    • > He is trying to eliminate the competition,

      Funny way of doing it, going around saying "you should regulate us, but don't regulate people smaller than us, and don't regulate open-source".

      > you must report to the US government about how you created and tested your model.

      If you're referring to the recent executive order: only when dual-use, meaning the following:

      ---

      (k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

      (i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

      (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

      (iii) permitting the evasion of human control or oversight through means of deception or obfuscation.

      - https://www.whitehouse.gov/briefing-room/presidential-action...

    • The "bad acting human" are the assholes who uses "AI" to create fake imagery to push certain (and likely false) narratives on the various medias.

      Key thing here is that this is fundamentally no different from what has been happening since time immemorial; it just becomes easier with "AI" as part of the tooling.

      Every piece of bullshit starts from the "bad acting human". Every single one. "AI" is just another new part of the same old process.


  • This is true, but skirts around a bit of the black box problem. It's hard to put guardrails on an amoral tool whose failure modes are hard to fully understand. And it doesn't even require "bad acting humans" to do damage; it can just be well-intentioned but naïve humans.

    • It's true that the more complex and capable the tool is, the harder it is to understand what it empowers the humans using it to do. I only wanted to emphasize that it's the humans that are the vital link, so to speak.


  • Sure, today at least. But there is a future where the human has given AI control of things, with good intention, and the AI has become the threat.

    AI is a tool today; tomorrow AI will be calling the shots in many domains. It's worth planning for tomorrow.

    • A good analogy might be a shareholder corporation: each one began as a tool of human agency, and yet a sufficiently mature corporation has a de facto agency of its own, transcending any one shareholder, employee, or board member.

      The more AI/ML is woven into our infrastructure and economy, the less it will be possible to find an "off switch", any more than we can (realistically) find an off switch for Walmart, Amazon, etc.


    • > there is a future where the human has given AI control of things, with good intention, and the AI has become the threat

      As in, for example, self-driving cars being given more autonomy than their reliability justifies? The answer to that is simple: don't do that. (I'm also not sure all such things are being done "with good intention".)


    • I'd rather address our reality than plan for someone's preferred sci-fi story. We're utterly ignorant of tomorrow's tech. Let's solve what we know is happening before we go tilting at windmills.

    • WHY on earth would we let "AI systems" we don't understand control powerful things we care about? We should criticize the human, politician, or organization that enabled that.


  • A big problem with discourse on AI is people talking past each other because they're not being clear enough on their definitions.

    An AI doomer isn't talking about any current system, but about hypothetical future ones that can do planning and have autonomous feedback loops. These are best thought of as agents rather than tools.

    • But how does this agent interact with the outside world? It's just a piece of silicon buzzing with electricity until it outputs a message that some OTHER system reads and interprets.

      Maybe that's a set of servos and robotic legs, or maybe it's a Bloomberg terminal and a bank account. You'll notice that all of these things are already regulated if they have enough power to cause damage. So in the end the GP is completely right; someone has to hook up the servos to the first LLM-based terminator.

      This whole thing is a huge non-issue. We already (strive to) regulate everything that can cause harm directly. This regulation reaches these fanciful autonomous AI agents as well. If someone bent upon destroying the world had enough resources to build an AI basilisk or whatever, they could have spent 1/10 the effort and just created a thermonuclear bomb.


  • If people understood this, then they would have to live with the unsatisfying reality that not all violators can be punished. Painting the technology itself as potentially criminal lets them take revenge on the corporations instead, which is mostly what artist types want.

  • If you apply this thinking to nuclear weapons it becomes nonsensical, which tells us that a tool that can only be oriented to do harm will only be used to do harm. The question then is whether LLMs, or AI more broadly, will even potentially help the general public, and there is no reason to think so. The goal of these tools is to be able to continue running the economy while employing far fewer people. These tools are oriented by their very nature to replace human labor, which in the context of our economic system has a direct and unbreakable relationship to a reduction in the well-being of the humans it replaces.

    • > a tool that can only be oriented to do harm

      Nuclear technology can be used for non-harmful things. Even nuclear bombs can be used for non-harmful things--see, for example, the Orion project.

      > These tools are oriented by their very nature to replace human labor

      So is a plow. So is a factory. So is a car. So is a computer. ("Computer" used to be a description of a job done by humans.) The whole point of technology is to reduce the amount of human drudge work that is required to create wealth.

      > in the context of our economic system has a direct and unbreakable relationship to a reduction in the well-being of the humans it replaces

      All of the technologies I listed above increased the well-being of humans, including those they replaced. If we're anxious that that might not happen under "our economic system", we need to look at what has changed from then to now.

      In a free market, the natural response to the emergence of a technology that reduces the need for human labor in a particular area is for humans to shift to other occupations. That is what happened in response to the emergence of all of the technologies I listed above.

      If that does not happen, it is because the market is not free, and the most likely reason for that is government regulation, and the most likely reason for the government regulation is regulatory capture, i.e., some rich people bought regulations that favored them from the government, in order to protect themselves from free market competition.

      1. You've fallen for the lump of labor fallacy. A 100x productivity boost ≠ 100x fewer jobs, any more than a 100x boost = static jobs with 100x more projects. Reality is far more complicated, and viewing labor as a static lump in a zero-sum game will lead you astray.

      2. Your outlook on the societal impact of technology is contradicted by reality. Historically, better tech has always meant more jobs and greater well-being. Today is the best time in human history to be alive by virtually every metric.

      3. AI has been such a massive boon to humanity and your everyday existence for years that questioning its public utility is frankly bewildering.


    • Nuclear weapons are a tool to keep peace via MAD (mutual assured destruction).

      It's most likely the main reason there have been no direct world wars between superpowers.

  • But usually there’s a one-way flow of intent from the human to the tool. With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.

    You can already see this on today's internet. I'm sure the Pizzagate people genuinely believed they were doing a good thing.

    This isn’t the same as an amoral tool like a knife, where a human decides between cutting vegetables or stabbing people.

    • > With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.

      The answer to this is simple: don't use a tool you don't understand. You can't fix this problem by nerfing the tool. You have to fix it by holding humans responsible for how they use tools, so they have an incentive to use them properly, and to not use them if they can't meet that requirement.

I think this may be a little short sighted.

AI “systems” are provided some level of agency by their very nature. That is, for example, you cannot predict the outcomes of certain learning models.

We necessarily provide agency to AI because that’s the whole point! As we develop more advanced AI, it will have more agency. It is an extension of the just world fallacy, IMO, to say that AI is “just a tool” - we lend agency and allow the tool to train on real world (flawed) data.

Hallucinations are a great example of this in an LLM. We want the machine to have the agency to cite its sources… but we also create the potential for absolute nonsense citations, which can be harmful in and of themselves, even though the human using the tool may have perfectly positive intent.

AI can become a highly autonomous system that's separate from us. Current technological limits just make that a hard sell for now.

LLMs, viewed as general-purpose simulators/predictors, don't necessarily have any agency or goals by themselves. But there is nothing to say that humans cannot make them simulate an agent with its own goals, whether through malice or by mistake. Model capabilities are the limiting factor right now, but with the rise of more capable uncensored models, it isn't difficult to imagine a model attaining some degree of autonomy, or at least doing a lot of damage before imploding in on itself.

> Bad acting humans with AI systems are the threat

Does this mean "humans with bad motives" or does it extend to "humans who deploy AI without an understanding of the risk"?

I would say the latter warrants a discussion of the AI systems themselves, if their opaqueness makes the risk hard to understand.

> Bad acting humans with AI systems are the threat, not the AI systems themselves.

It's worth noting this is exactly the same argument used by pro-gun advocates as it pertains to gun rights. It's identical to: guns don't harm/kill people, people harm/kill people (the gun isn't doing anything until the bad actor aims and pulls the trigger; bad acting humans with guns are the real problem; etc).

It isn't an effective argument and is very widely mocked by the political left. I doubt it will work to shield the AI sector from aggressive regulation.

  • It is an effective argument though, and the left is widely mocked by the right for simultaneously believing that only government should have the necessary tools for violence, and also ACAB.

    Assuming ML systems are dangerous and powerful, would you rather they be restricted to a small group of power-holders who will definitely use them to your detriment/to control you (they already do) or democratize that power and take a chance that someone may use them against you?

    • Communists and anarchists understand that the working class needs to defend itself from both the capitalist state and from fascist paramilitaries, thus must be collectively armed.

      It’s only a kind of liberal (and thus right wing) that argues for gun control. Other kinds of liberals that call themselves “conservative” (also right wing) argue against it and for (worthless) individual gun rights.

  • By that logic:

    Are we going to ban and regulate Photoshop and GIMP because bad people use them to create false imagery for propaganda?

    Actually, back that up for a second.

    Are we going to ban and regulate computers (enterprise and personal) because bad people use them for bad things?

    Are we going to ban and regulate speech because bad people say bad things?

    Are we going to ban and regulate hands because bad people use them to do bad things?

    The buck always starts and stops at the person doing the act. A tool is just a tool, blaming the tool is nothing but an act of scapegoating.

  • This argument pertains to every tool: guns, kitchen knives, cars, The Anarchist Cookbook, etc. You aren't against the argument. You're against how it's used. (Hmm...)

    • Right, and most tools that can be used for harm are regulated. Cars, knives, and guns included.

It's not either/or. At some point AI is likely to become autonomous.

If it's been trained by bad actors, that's really not a good thing.

The disturbing thing to consider is that it might be bad acting AI with human systems. I can easily see a situation where a bad acting algorithm alone wouldn't have nearly so negative an effect; the danger comes when it's tuned precisely and persuasively to get more humans to do the work of increasing the global suffering of others for temporary individual gain.

To be clear, I'm not sure LLMs and their near-term derivatives are so incredibly clever, but I have confidence that many humans have a propensity for easily manipulated, irrational, destructive stupidity if the algorithm feeds them what they want to hear.

It reminds me of dog breeds.

Some dogs get bad reputations, but humans are an integral part of the picture. For example, German Shepherds are objectively dangerous, but they have a good reputation because they are trained and cared for by responsible people, such as police handlers.