
Comment by bumby

2 years ago

This is true, but it skirts the black box problem a bit. It's hard to put guardrails on an amoral tool whose failure modes can't be fully understood. And it doesn't even require "bad acting humans" to do damage; well-intentioned but naïve humans are enough.

It's true that the more complex and capable the tool is, the harder it is to understand what it empowers the humans using it to do. I only wanted to emphasize that it's the humans who are the vital link, so to speak.

  • You're not wrong, but I think this quote partly misses the point:

    >The problem to be solved here is not how to control AI

    When we talk about mitigations, we are explicitly talking about how to control the AI itself, sometimes irrespective of how someone uses it.

    Think about it this way: suppose I develop a stock-trading AI that can (inadvertently or deliberately) crash the stock market. Is the better control to put limits on the software itself so that it cannot crash the market, or to put regulations in place that penalize people who use the software to crash it? There is a hierarchy of controls when we talk about risk, and engineering controls (limiting the software) always rank above administrative controls (limiting the humans who use the software).

    (I realize it's not an either/or, and both controls can - and probably should - be in place, but I framed it as a dichotomy to illustrate the point. A rough sketch of what an engineering control looks like in code is below.)
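
    To make the "engineering control" idea concrete, here is a minimal Python sketch, purely hypothetical: a hard-limit wrapper that every order must pass through, no matter what the trading model asks for. The Order and GuardedExecutor names and the specific limits are my own invention for illustration, not any real trading API.

        import time
        from dataclasses import dataclass

        @dataclass
        class Order:
            symbol: str
            quantity: int   # signed: negative means sell
            price: float

        class OrderLimitError(Exception):
            """Raised when an order violates a hard limit."""

        class GuardedExecutor:
            # The limits live outside the model, so no model output can bypass them.
            def __init__(self, max_order_value: float, max_orders_per_minute: int):
                self.max_order_value = max_order_value
                self.max_orders_per_minute = max_orders_per_minute
                self._accepted_at: list[float] = []  # timestamps of accepted orders

            def submit(self, order: Order) -> None:
                now = time.monotonic()
                # Cap the notional value of any single order.
                if abs(order.quantity) * order.price > self.max_order_value:
                    raise OrderLimitError("single-order value exceeds hard cap")
                # Throttle: keep only timestamps from the last 60 seconds.
                self._accepted_at = [t for t in self._accepted_at if now - t < 60.0]
                if len(self._accepted_at) >= self.max_orders_per_minute:
                    raise OrderLimitError("order rate exceeds hard cap")
                self._accepted_at.append(now)
                # ...only now would the order be forwarded to the real exchange...

    The point is that the model never talks to the exchange directly; the cap is enforced in code, not in a policy document telling operators to behave. That's the difference between an engineering control and an administrative one.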

    • My first thought is that the problem is with the stock market itself. The stock market "API" should not allow humans or machines to "damage" our economy.
