Comment by imnotlost
6 days ago
Am I alone in thinking this is easy?
The human making the decision is always liable.
What if the human couldn't reasonably have known better? Doesn't matter - if they would have made the same decision without AI, or with old files, it's still on them.
What if there's no single human decision? Someone is in charge, and that someone is responsible. "I was just following orders" isn't a defense.
Does liability without power make sense? The people executing a decision have the power to execute it, hence liability. If they're executing without authority, that's a different kind of liability, but still liability.
It may let the powerful off the hook - that's already a recurring theme, and AI doesn't change it; AI will just be used as another scapegoat.
"God told me to do it" - watertight! Right?
Let's say I start an AI program and my initial prompt is "Copy these files to this other computer", and then 100 iterations down the agentic loop the AI decides to hack into Tesla's FSD and ships an update that kills 500 people.
Who is liable?
Obviously this is up to courts and juries to hammer out but...
- Your agentic loop hacked something? You're liable.
- FSD crashes? The guy in the driver's seat is liable. He/his insurance can sue Tesla to spread the liability...
Nowhere along the line will anyone go "Oh, the AI did it... whoops"
I don’t know.
Let’s say someone sells me a shovel and markets it as a shovel. Then the shovel explodes because it was actually a bomb.
Presumably the manufacturer is liable for passing off their bomb as a shovel.
This metaphor seems reasonably accurate for current LLMs.
> Someone is in charge and is responsible.
This doesn't seem to be a given.
There's always a CEO or a president. The buck always stops somewhere. Somebody is always making the big bucks because they are in charge.
That's true for legally incorporated companies. But what if it's just a crypto wallet hooked up to a network of prompts? Are you sure a human must have created it?