
Comment by brookst

2 years ago

The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.

Because they are all ill-defined in the way they are used in common language. Hell, we have trouble describing what they are, especially in a scientific, fact-based setting.

Before this point in history we accepted 'I am that I am' because there wasn't any challenger to the title. Now that we are putting it into question, we realize our definitions may not work well.

>The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.

Well, I'm no fan of chatGPT. But it appears most people are worse than chatGPT, because they just regurgitate what they hear with no thought or contemplation. So you can't really blame average folks who struggle with the concepts of intelligence/understanding that you mention.

Which should be no surprise, as people have been grappling with these ideas for centuries, and we still don't have any definitive idea of what consciousness/sentience truly is. What I find interesting is that at one point the Turing test seemed to be the gold standard for intelligence, but chatGPT could pass that with flying colors. So how exactly will we know if/when true intelligence does emerge?

  • Well, my point wasn’t that there is a good definition of consciousness.

    My point was that “consciousness” and “intelligence” are very different things. One does not imply the other.

    Consciousness is about self-reflection. Intelligence is about insight and/or problem solving. The two are often correlated, especially in animals, and especially in humans, but they're not the same thing at all.

    “Is chatgpt conscious” is a totally different question than “is chatgpt intelligent”.

    We will know chatgpt is intelligent when it passes our tests of intelligence, which are imperfect but at least directionally correct.

    I have no idea if/when we will know whether chatgpt is conscious, because we don’t really have good definitions of consciousness, let alone tests, as you note.

The most annoying thing to me is people thinking AI wants things and gets happy and sad. It doesn't have a mammalian or reptilian brain. It just holds a mirror up to humanity, generally via matrix math and probability.

  • Well said. It is a mistake to anthropomorphize large language models; they really hate that.