Comment by randallsquared
13 hours ago
> It only operates algorithmically on the input, which is distinctly not what people do when they read something.
That's not at all clear!
> Language, when conveyed between conscious individuals, creates a shared model of the world. This can lead to visualizations, associations, emotions, and the creation of new memories, because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
All of that is called into question by some LLM output. It's hard to understand how some of that could be produced without some emergent model of the world.
In the thought experiment as constructed, it is abundantly clear. That's the point.
LLM output doesn't call that into question at all. Token production via distance functions in a high-dimensional vector representation of language tokens gets you a long way. It doesn't get you understanding.
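To make "token production via distance functions" concrete, here is a minimal Python sketch, assuming a hypothetical toy vocabulary and 2-D embeddings (real LLMs use learned, high-dimensional embeddings and a transformer with sampling over logits, not raw nearest-neighbor lookup):

    import numpy as np

    # Hypothetical toy vocabulary and 2-D embeddings for illustration only.
    vocab = ["cat", "sat", "mat", "dog"]
    embeddings = np.array([
        [0.9, 0.1],  # cat
        [0.2, 0.8],  # sat
        [0.3, 0.7],  # mat
        [0.8, 0.2],  # dog
    ])

    def next_token(context_vector: np.ndarray) -> str:
        # Pick the token whose embedding is nearest to the context vector.
        # Purely syntactic: it compares distances between vectors and never
        # refers to what any token means.
        distances = np.linalg.norm(embeddings - context_vector, axis=1)
        return vocab[int(np.argmin(distances))]

    # A context vector near "sat"/"mat" in the toy space yields "sat".
    print(next_token(np.array([0.25, 0.75])))

The point of the sketch is that the selection rule operates entirely on geometry; whether that kind of manipulation, scaled up, amounts to understanding is exactly what's in dispute here.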
I'll take Penrose's notion that consciousness is not computation any day.
Out of interest, what do you think it would look like if communicating were algorithmic?
I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know?
I think it would end inspiration.
I should have snipped the "it operates" part to communicate better. I meant that it's not at all clear that people are doing something non-algorithmic.