Comment by Peritract

7 days ago

If I tell someone to kill someone else and they do, then I should be held responsible.

If I write instructions in a book that I give to someone telling them to kill someone else and they do, then I should be held responsible.

If I give someone a tool I made that I bill as having more-than-PhD-level intelligence, and it tells someone to kill someone else and they do, then I should be held responsible.

All of the above situations seem equivalent to me; I'm not the only person responsible in each case, but I gave them instructions and they followed them.

Man, this debate is both very interesting and highly confusing, with different people oscillating between "It's just a tool" and "It's a human being."

  • It is a tool, but it's a tool that is sold by OpenAI as providing a high degree of intelligence. That's an endorsement of what the tool outputs as advice, which is what makes them responsible.

    • > That's an endorsement of what the tool outputs as advice

      That's not even close to true!

Even if you've been living under a rock for the last five years and didn't already know these models are unreliable, pretty much every provider puts a disclaimer next to the chat box informing you of that fact.
