Comment by probably_wrong

6 days ago

> Incendiary and false headline aside

The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.

> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?

No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if Monsanto (now Bayer) has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see why it should be different when the one causing cancer is an AI, just because the developers pinky-swear that it's safe.

The headline is completely false and misleading. The bill does not indemnify AI companies from all mass murder, as the headline implies. It indemnifies them if they UNKNOWINGLY provide a product that is used by others for mass murder.

If someone asks ChatGPT for places in a city where a lot of people will be around, intending mass murder but not revealing that intent, do you want the developers to be liable? That seems absolutely crazy.