
Comment by AnotherGoodName

7 months ago

I'm not even convinced the intent is there, though. An AI parroting Terminator 2 lines is just that. Obviously no one should hook the AI up to nuclear launch systems, but that's like saying no one should give a parrot a button to launch nukes. The parrot repeating curse words isn't the problem here.

If I'm a guy working in a missile silo in North Dakota and I can buy a parrot for a couple hundred bucks that does all my paperwork for me, cracks funny jokes, and makes me better at my job, I might be tempted to bring the parrot down into the tube with me. And then the parrot becomes a problem.

It's incumbent on us to put policies and procedures in place ahead of time, now that we know these parrots are out there, to prevent people from putting parrots where they shouldn't.

  • This is why, when I worked in a secure area (and not even a real SCIF), something as simple as bringing in an electronic device would have drawn a non-trivial amount of punishment, beginning with losing access to the area and potentially escalating to loss of clearance and even jail time. I hope the silos and all related infrastructure have significantly better policies already in place.

    • On the one hand, what you say is correct.

      On the other, we don't just have Snowden and Manning circumventing systems for noble purposes; we also have people who got Stuxnet onto isolated networks, others who leaked that virus off a supposedly isolated network, and Hillary Clinton, who famously ran her own inappropriate email server.

      (Not on topic, but from the other side of the Atlantic, how on earth did the US go from "her emails/lock her up" being a rallying cry to electing the guy who stacked piles of classified documents in his bathroom?)


  • What makes you think parrots are allowed anywhere near the tube? Or that a single guy has the power to push the button willy-nilly?

  • Indeed. And what is intent anyway?

    Would you even be able to tell the difference if you didn't know who is the person and who is the AI?

    Most people do things they're parroting from their past. A lot of people don't even know why they do things, but somehow you know that a person has intent and an AI doesn't?

    I would posit that the only way you know is because of the labels assigned to the human and the computer, and not from their actions.

It doesn't matter whether the intent is real. I don't believe the AI has actual intent or consciousness either. But the behavior is real, and that is all that matters.

What action(s) by the system could convince you that the intent is there?

  • It actually doesn't matter. AI in its current form is capable of extremely unpredictable actions, so I won't trust it in situations that require traditional, predictable algorithms.

    The metrics here ensure only that an AI which doesn't type "kill all humans" into the chat box is allowed to do such things. That's a silly metric: it just ensures that otherwise unpredictable AIs don't type bad stuff specifically into chat boxes. In their current form they'll still hit the wrong button from time to time, but we'll at least have ensured they don't type that they'll do it, since that's the specific metric we're going for here.
    The metrics here ensure that only AI that doesn't type "kill all humans" in the chat box is allowed to do such things. That's a silly metric and just ensures that the otherwise unpredictable AIs don't type bad stuff specifically into chatboxes. They'll still hit the wrong button from time to time in their current form but we'll at least ensure they don't type that they'll do that since that's the specific metric we're going for here.