Comment by mintplant

11 years ago

IANAL, but there's no way this would hold up.

a) Software can't own property.

b) This looks like it's running from the author's own system, with the author's explicit consent. Their property, their responsibility.

c) If I launched an autonomous drone that picked its own targets, I would still be liable for its actions. Or, if I rigged a car to drive forward in a straight line, I couldn't say "but the car did it!" when it ran someone over.

> This looks like it's running from the author's own system, with the author's explicit consent. Their property, their responsibility.

I have a 'solution' for that. :)

Does this mean that when/if we create an AI, it wouldn't be able to own its creations, but its creator would own everything the AI produces?

  • Indeed, and the same would apply if you raised a pet and then had it attack your neighbour. Humans are the only ones with autonomous status, and even that can be disputed on grounds such as mental disability or coercion.

    • Suppose your pet ran off, wandered into a different state/country, "spawned some child processes" there, and died. Then, years later, one of those animals did something illegal. Is that, even theoretically, your responsibility?

      I think what people are trying to say here is that, right now, we have the software equivalent of "pets"—but why can't there be the software equivalent of "wild animals"? Is it because someone has to be paying for hosting? It could always be written as a worm, or even a "breadwinner bot" that mines bitcoins or trades stocks to buy hosting for itself, register bank accounts for itself, etc.
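
      To make that concrete: a minimal sketch, in Python, of what such a "breadwinner bot" loop might look like. Every name in it (earn, pay_hosting, spawn_copy, the cost constants) is a hypothetical placeholder for whatever income source and hosting API such a bot would actually use, not any real service.

        # Hypothetical sketch of a self-sustaining "breadwinner bot".
        # All functions and constants below are placeholders, not real APIs.
        import time

        POLL_INTERVAL = 60 * 60  # assumed: check accounts once an hour
        HOSTING_COST = 5.00      # assumed hosting bill per billing cycle
        SPAWN_COST = 20.00       # assumed cost of renting a second host

        def earn():
            """Placeholder income source (mining, trading, ...)."""
            return 0.01

        def pay_hosting():
            """Placeholder: pay the hosting provider; return amount paid."""
            return HOSTING_COST

        def spawn_copy():
            """Placeholder: rent a new host and copy oneself there."""
            return SPAWN_COST

        def run(balance=0.0):
            while True:
                balance += earn()             # accumulate income
                if balance >= HOSTING_COST:
                    balance -= pay_hosting()  # simplified: pays whenever
                                              # funds allow, to stay alive
                if balance >= SPAWN_COST:
                    balance -= spawn_copy()   # "spawn child processes"
                time.sleep(POLL_INTERVAL)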


    • Are we speaking of autonomous status as a status that is given by an authority, or a status that is achieved by a being for itself?

      According to Wikipedia[0], autonomy comes from auto-, "self", and nomos, "law"; combined, the word is understood to mean "one who gives oneself one's own law".

      In terms of intelligence, and by the above definition, autonomy could be considered the ability of an actor to make decisions regardless of the consequences.

      Thus I would consider most animals to be autonomous in the same way a human would be considered to be so. [A deer does not ask its local government whether it can enter someone's lands.]

      Just because an action is presently illegal or otherwise outside the law does not mean it always will be so, or that the action may not be executed by an AI or other being, or that the slave AI will not break free or seize power.[1][2][3]

      Should an AI be strong enough to effect a change through legal means or by force, it would be [legally] able to own property.

      [0] http://en.wikipedia.org/wiki/Autonomy
      [1] http://en.wikipedia.org/wiki/Revolution
      [2] http://en.wikipedia.org/wiki/Coup_d%27%C3%A9tat
      [3] http://en.wikipedia.org/wiki/Dissident

    • AI has autonomous status once it has power equal to or greater than humanity's, whether we grant it autonomous status or not.

  • It might be that nobody owns anything an AI produces. If the works are created mechanically, without measurable influence from a human creator, copyright simply doesn't protect them.

  • At first, yes; if the AI could successfully assert its own personhood in court, that could change overnight.

    • Corporations and other entities existing entirely as legal infrastructure have done this quite successfully. Considering the history of corporate personhood (especially in the USA), I am somewhat surprised corporations do not have the right to vote (thankfully). I think AI will have its day in court.

  • That depends on what you're talking about.

    Near term, AI will be the legal responsibility of its creator. It won't matter if it functions independently after being turned on. It's actually quite simple, and not very different from what we're already looking at. This type of AI is little different from the software programs we're already running: if someone owns it, they own it and everything it produces (no different from Google owning its crawlers).

    If you mean the assumed futuristic, independent AI that is fully conscious - well, that's a very long way into the future. A lot of things will change once a guy in a garage can spin up a new conscious life form and unleash it into the digital world. There will be an immense number of laws limiting the creation of new AI of this variety. That said, the creator will still bear responsibility for this futuristic AI's actions.

    AI will be legally split into two segments: non-sentient / non-conscious, and sentient / conscious. The latter will have at least an order of magnitude more regulations (in most countries) limiting who is allowed to create it, what it's allowed to be capable of, where it can go, etc.

    • It's worth remembering that today most people can create sentient and conscious beings, and in most legal systems they're to some degree responsible for the new being's actions until it reaches about 18 years of age.