Comment by flatline

3 days ago

Likewise, just because you've been forbidden to do something doesn't mean it's bad or the wrong action to take. We've really opened Pandora's box with AI. I'm not all doom and gloom about it like some prominent figures in the space, but taking some time to pause and reflect on its implications certainly seems warranted.

An LLM is a tool. If the tool is not supposed to do something yet does it anyway, then the tool is broken. That's radically different from, say, a soldier refusing an illegal order, because a soldier, being human, possesses free will and agency.

How do you mean? When would an AI agent doing something it's not permitted to do ever not be bad or the wrong action?

  • So many options, but let's go with the most famous one:

    Do not criticise the current administration/operators-of-ai-company.

    • Well no, breaking that rule would still be the wrong action, even if you consider it morally better. By analogy, a nuke would be malfunctioning if it failed to explode, even if that is morally better.


  • When the instructions to not do something are themselves the problem, or "wrong."

    E.g., when an AI company puts guardrails in to prevent its LLM from talking about elections: there is nothing inherently wrong with talking about elections, but companies do it because of the PR risk in today's media/social environment.