Comment by danny_codes

5 days ago

Let's say I start an AI program and my initial prompt is "Copy these files to this other computer", and then 100 iterations down the agentic loop the AI decides to hack into Tesla's FSD and ships an update that kills 500 people.

Who is liable?

Obviously this is up to courts and juries to hammer out but...

- Your agentic loop hacked something? You're liable.
- FSD crashes? The guy in the driver's seat is liable. He/his insurance can sue Tesla to spread the liability...

Nowhere along the line will anyone go "Oh, the AI did it... whoops"

  • I don’t know.

    Let’s say someone sells me a shovel and markets it as a shovel. Then the shovel explodes because it was actually a bomb.

    Presumably the manufacturer is liable for passing off their bomb as a shovel.

    This metaphor seems reasonably accurate for current LLMs.