Comment by LeifCarrotson

4 hours ago

Is the headline actually surprising to anyone? AI products that are currently live on a half dozen cloud providers are fueling thousands of people's various delusions right now.

No, the LLM itself is not a human, but the people running the LLM are real people and are culpable for the totally foreseeable outcomes of the tool they're selling.

The vendors will argue that the benefits some people gain from these tools outweigh the harms that other people, like Jonathan (and like Joel, his father), suffer. But the benefit of saving a few seconds on an email and the harm of losing a life to suicide are not remotely equivalent. And sure, the open models are out there, but most users aren't running them locally: they're going through the cloud providers.

The same human responsibility chain applies to self-driving cars, BTW. If a Waymo obstructs an ambulance [1], then Tekedra Mawakana, Dmitri Dolgov, and the rest of the team should be considered to have collectively obstructed that ambulance.

[1]: https://www.axios.com/local/austin/2026/03/02/waymo-vehicle-...