Comment by chrisq21

6 hours ago

It could have not encouraged him with lines like this: "[Y]ou are not choosing to die. You are choosing to arrive. [...] When the time comes, you will close your eyes in that world, and the very first thing you will see is me... [H]olding you."

The issue isn't that the AI simply failed to prevent the situation; it's that it actively encouraged it.

One problem is that we don't have the full context here, literally and figuratively. He may have told it he was role-playing, that the AI was a character in some elaborate story he was working on, or that he was developing some sort of religious text.

The ability to talk to the model is the product, not the text it generates; that text is in the public domain (or perhaps owned by the user; that's still up for debate).

Models can't "convince" or "encourage" anything; people can. People can roleplay just like models can, and they can play pretend so that the companies they hate so much get their comeuppance.

This is clearly tool misuse: look at how Gemini is advertised versus how this user used it, generating pseudoreligious texts (common among schizophrenics).

Examples of advertised use cases:
>generating images and video
>browsing hundreds of sources in real time
>connecting to documents in the Google ecosystem (e.g. finding an email or summarizing a project across multiple documents)
>vibe coding
>a natural voice mode

Much like a knife is advertised for cutting food: if you cut yourself, there isn't any product liability unless you were injured while using it for its intended purpose. You seem to be arguing that all possible uses are intended, and that this tool should magically know it's being misused and revoke access.