Comment by slowmovintarget

15 hours ago

In the thought experiment as constructed it is abundantly clear. It's the point.

LLM output doesn't call that into question at all. Token production via a distance function over a high-dimensional vector representation of language tokens gets you a long way. It doesn't get you understanding.
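(For a concrete reading of that characterization: a minimal toy sketch of "next token by geometric proximity" might look like the following. The vocabulary, vectors, and `nearest_token` helper are all made up for illustration, and real transformers use learned attention and a softmax over logits rather than a nearest-neighbor lookup, but the sketch shows how a system can pick a fitting token by distance alone, with no notion of what the token means.)

```python
import numpy as np

# Toy vocabulary with 3-dimensional embeddings (real models use
# thousands of dimensions; these vectors are invented for illustration).
vocab = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.3]),
    "truck": np.array([0.2, 0.8, 0.4]),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_token(query):
    """Pick the vocabulary token whose embedding is closest to `query`."""
    return max(vocab, key=lambda tok: cosine_similarity(vocab[tok], query))

# A context vector near the "animal" region of the space selects "cat"
# purely by geometric proximity -- no understanding involved.
context = np.array([0.88, 0.12, 0.02])
print(nearest_token(context))  # -> "cat"
```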

I'll take Penrose's notion that consciousness is not computation any day.

Out of interest, what do you think it would look like if communicating was algorithmic?

I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know?

I should have snipped the "it operates" part to communicate better. What I meant is that it's not at all clear that people are doing something non-algorithmic.