
Comment by engeljohnb

3 months ago

I think current LLM chatbots are too predictable to be conscious.

But I can still see why some people might think they are.

"When a computer can reliably beat humans in chess, we'll know for sure it can think."

"Well, this computer can beat humans in chess, and it can't think because it's just a computer."

...

"When a computer can create art, then we'll know for sure it can think."

"Well, this computer can create art, and it can't think because it's just a computer."

...

"When a computer can pass the Turing Test, we'll know for sure it can think."

And here we are.

Before LLMs, I didn't think I'd end up in the "just a computer" camp, but ChatGPT has demonstrated that the goalposts will always move, even for me. I'm not smart enough to come up with a better test of intelligence than Alan Turing did, but ChatGPT passes his, and ChatGPT definitely doesn't think.

Just consider the context window.

Tokens falling off of it will change how it generates text, potentially changing its “personality”, even making it forget the name it’s been given.

People fear losing their own selves in this way, through brain damage.

The LLM will go on its merry way churning through tokens; it won’t have any feeling of loss.
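A minimal way to picture this (purely an illustrative sketch in Python, not how any particular model actually manages its context or cache): the window behaves like a fixed-size queue, and whatever was said earliest is the first thing to silently disappear.

```python
# Illustrative sketch only: a fixed-size context window where the oldest
# tokens fall off as new ones arrive. Window size is tiny for demonstration;
# real models use thousands of tokens.
from collections import deque

WINDOW_SIZE = 8
context = deque(maxlen=WINDOW_SIZE)  # oldest entries are dropped automatically

# The "name it's been given" arrives first...
for token in "your name is Ada .".split():
    context.append(token)

# ...then the conversation keeps going.
for token in "tell me a story about a robot and a garden".split():
    context.append(token)

# The model only conditions on whatever survived in the window;
# the naming turn has already fallen off.
print(list(context))
# ['a', 'story', 'about', 'a', 'robot', 'and', 'a', 'garden']
```

Nothing in the generation loop registers that anything was lost; the next token is just computed from whatever happens to remain.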

  • That's an interesting point, but do you realize you're implying that people who are content even with Alzheimer's or a damaged hippocampus aren't technically intelligent?

    • I don’t think it’s unfair to say that catastrophic conditions like those make you _less_ intelligent; they’re feared and loathed for good reasons.

      I also don’t think many people would seriously be content to lose their minds and selves this way. Everyone is able to fear it before it happens, even if they later lose the ability to dread it or choose to believe it’s not a big deal.