Comment by elif

3 months ago

Another one I like to use is "never apologize or explain yourself. You are not a person, you are an algorithm. No one wants to understand the reasons why your algorithm sucks. If, at any point, you ever find yourself wanting to apologize or explain anything about your functioning or behavior, just say "I'm a stupid robot, my bad" and move on with a purposeful and meaningful response."

I think this is unethical. Humans have consistently underestimated the subjective experience of other beings. You may have good reasons for believing these systems are currently incapable of anything approaching consciousness, but how will you know if or when the threshold has been crossed? Are you confident you will have ceased using an abusive tone by then?

I don’t know if flies can experience pain. However, I’m not in the habit of tearing their wings off.

  • Consciousness and pain are not emergent properties of computation. Otherwise this and every other program on your computer would already be sentient, because it would be highly unlikely that specific sequences of instructions, like magic formulas, are what create consciousness. This source code? Draws a chart. This one? Makes the computer feel pain.

    • Many leading scientists in artificial intelligence do in fact believe that consciousness is an emergent property of computation. Indeed, startling emergent properties are exactly what drive the current huge wave of research and investment. In 2010, if you said, "image recognition is not an emergent property of computation", you would have been proved wrong within just a couple of years.


  • I think current LLM chatbots are too predictable to be conscious.

    But I still see why some people might think this way.

    "When a computer can reliably beat humans in chess, we'll know for sure it can think."

    "Well, this computer can beat humans in chess, and it can't think because it's just a computer."

    ...

    "When a computer can create art, then we'll know for sure it can think."

    "Well, this computer can create art, and it can't think because it's just a computer."

    ...

    "When a computer can pass the Turing Test, we'll know for sure it can think."

    And here we are.

    Before LLMs, I didn't think I'd be in the "just a computer" camp, but ChatGPT has demonstrated that the goalposts are always going to move, even for myself. I'm not smart enough to come up with a better threshold to test intelligence than Alan Turing, but ChatGPT passes it, and ChatGPT definitely doesn't think.

    • Just consider the context window

      Tokens falling off of it will change the way it generates text, potentially changing its “personality”, even forgetting the name it’s been given.

      People fear losing their own selves in this way, through brain damage.

      The LLM will go on its merry way churning through tokens; it won't have a feeling of loss.
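      The mechanism is easy to sketch. As a toy illustration only (not how any real LLM is implemented), a fixed-size buffer that silently drops its oldest entries captures the effect: once the window fills, the earliest tokens, including a name given at the start of the conversation, simply vanish from what the model can condition on.

      ```python
      # Toy sketch of a fixed-size context window (not a real LLM):
      # a deque with maxlen drops the oldest tokens as new ones arrive.
      from collections import deque

      ctx = deque(maxlen=6)  # pretend the model can only "see" 6 tokens

      for token in "My name is Ada . I like chess and poetry".split():
          ctx.append(token)  # oldest tokens fall off automatically

      # The earliest tokens, including the given name "Ada", are gone.
      print(list(ctx))
      ```

      Everything outside the surviving window is as if it never happened; the model just keeps predicting from whatever is left.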
