Comment by singron
1 year ago
People are confusing the limited computational model of a transformer with the "Chinese room argument", which leads to unproductive simultaneous debates of computational theory and philosophy.
Reply (1 year ago):

> People are confusing the limited computational model of a transformer with the "Chinese room argument", which leads to unproductive simultaneous debates of computational theory and philosophy.
I'm not confusing anything. I'm familiar with the Chinese Room Argument and I know how LLMs work.
What I'm saying is arguably philosophically related, in that the LLM's model is analogous to the "response book" in the room. It doesn't matter how big the book is; if the book never changes, then no learning can happen. And if no learning can happen, then understanding, a process that necessarily involves active reflection on a topic, cannot exist.
You simply can't say a book "understands" anything. To understand is to contemplate and mentally model a topic to the point where you can simulate it, at least at a high level. It's dynamic.
An LLM is static. It can simulate a dynamic response by having multiple stages that dig through insanely large books of instructions, books that cross-reference each other and involve calculations and bookmarks, to come up with a result. But the books never change as part of the conversation.
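To make the "static" point concrete, here's a minimal sketch of my own (using PyTorch and Hugging Face's transformers, with "gpt2" standing in for any LLM): generating a reply reads the weights but leaves every one of them bit-for-bit unchanged.

```python
# Sketch: inference reads the "book" but never rewrites it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no dropout, no training behavior

# Snapshot every parameter before the "conversation".
before = {name: p.detach().clone() for name, p in model.named_parameters()}

prompt = "Does a static model understand anything?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # no gradients, so no learning is even possible
    output = model.generate(
        **inputs,
        max_new_tokens=40,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0]))

# Every parameter is identical after generation: the book didn't change.
assert all(
    torch.equal(before[name], p) for name, p in model.named_parameters()
)
```

Whatever "in-context learning" happens lives entirely in the activations for the current prompt; close the conversation and it's gone, because nothing was ever written back to the model.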