Comment by jumpconc

20 hours ago

You haven't met certain humans. Not all humans have an internal capacity for accountability.

The real meaning of accountability is that you can fire one if you don't like how they work. Good news! You can fire an AI too.

Bad news! They will not be aware that you have done this and will not care.

  • The purpose of firing a person shouldn't be vengeance but to remove someone who is unreliable or not cost-effective.

    It's similarly reasonable to drop a tool that's unreliable, though I don't think that's a reasonable description here. Instead, they used a tool which is generally known to be unpredictable and failed to sandbox it adequately.

    • The purpose of firing a person is to remove someone unreliable, but also, having that skin in the game makes the person behave more reliably. The latter is something you cannot do with an LLM.

      The cold, hard fact is: LLMs are an unreliable tool, and using them without checking their every action is extremely foolish.


But it's still rather more difficult to sue an AI for leaking your company's data.

At least for now.