Comment by evilduck
1 year ago
Blindsight by Peter Watts also discusses what can be intelligent but not conscious. In the current hypefest of LLMs it’s interesting to consider that they may be similar.
I was thinking the same. If there's anything it is like to be an LLM (and I'm not saying that there is; in fact, I doubt it, while supposing it's a possibility for future machines) I suspect it would be like this, but more so, and inverted: while Keller had some experience of an external world but no experience of language, the entire universe for an LLM is language, without any obvious way to suppose that this language is about an external world.
I think that LLMs might go through the reverse journey, being fluent in tokens (words-ish) and working backwards towards the physical reality we all inhabit.
I think the "problem" here is that for all of human history we have been able to use mastery of language as a signal for intelligence and competence, neither of which LLMs possess. It's possible this is even instinctual, it's so ingrained in our concept of "other minds". So we're going to have to get used to the fact that just using language well isn't enough to prove intelligence, let alone consciousness.
Which then raises the question: what is the magic ingredient, on top of the use of language, that bestows these qualities?
And also the observation that whatever this ingredient is, it must be very difficult to measure or prove, which is maybe why we stuck with the crude but easy-to-wield "use of language" test for so long.
Available to read online, I read it last year: https://rifters.com/real/Blindsight.htm
"We do not like annoying cousins." Yes, exactly. The, uh, confident fluency of LLM responses, which can at the same time contradict what was said earlier, reminded me exactly of that. I don't know if you've ever met one of those glib psychopaths, but they have this characteristic of non-content communication, where it feels like words are being arranged for you, like someone composing a song using words from a language they do not know. See also: "you're talking a lot, but you're not saying anything."
Hm. The contradictions specifically are a thing I notice in humans that I think is entirely normal[0]. But the early LLMs with the shorter context windows reminded me of my mum's Alzheimer's.
That said, your analogy may well be perfect, as they are learning to people-please and to simulate things they (hopefully) don't actually experience.
(Not that it changes your point, but isn't that Machiavellian rather than psychopathic?)
[0] one of many reasons why I disagree with Wittgenstein about:
> If there were a verb meaning 'to believe falsely', it would not have any significant first person, present indicative.
Just because it's logically correct, doesn't mean humans think like that.
The part that really gets ME about that thought, is that those glib psychopaths/sociopaths fill an important role in human society, generally as leaders. I'm sure we can all think of some prominent political figures who are very good at arranging words to get their audience excited, but have a tenuous connection to fact (at best). Actually factual content seems almost irrelevant to their ability to lead, or to their followers' desire to follow.
If that's the function which we can now automate at scale, it's not the jobs the machines will ultimately take; it's the leadership.