Comment by throw310822
19 days ago
> no matter how many books you read about riding a bike, you still need to actually get on a bike and do some practice before you can ride it
This is like saying that no matter how much you know theoretically about a foreign language, you still need to train your brain to speak it. It has little to do with the reality of that language or the correctness of your model of it, but rather with the need to train real-time circuits to do some work.
Let me try some variations: "no matter how many books you read about ancient history, you need to have lived there before you can reasonably talk about it". "No matter how many books you have read about quantum mechanics, you need to be a particle..."
> It has little to do with the reality of that language or the correctness of your model of it, but rather with the need to train real-time circuits to do some work.
On the contrary, this is purely speculative and almost certainly wrong. Riding a bike *is* coordinating the real-time circuits in the right way, and language and a linguistic model fundamentally cannot get you there.
There are plenty of other domains like this, where semantic reasoning (e.g. unquantified syllogistic reasoning) just doesn't get you anywhere useful. I gave an example from cooking later in this thread.
You are falling IMO into exactly the trap of the linguistic reductionist, thinking that language is the be-all and end-all of cognition. Talk to e.g. actual mathematicians, and they will generally tell you they may broadly recruit visualization, imagined tactile and proprioceptive senses, and hard-to-vocalize "intuition". One has to claim this is all epiphenomenal, or that e.g. all unconscious thought is secretly using language, to think that all modeling is fundamentally linguistic (or more broadly, token manipulation). This is not a particularly credible or plausible claim given the ubiquity of cognition across animals or from direct human experiences, so the linguistic boundedness of LLMs is very important and relevant.
Funny, because riding a bicycle or speaking a language is exactly something people don't have a world model of. Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue. Is "making the right movement at the right time within a narrow boundary of conditions" a world model, or is it just predicting the next move?
> You are falling IMO into exactly the trap of the linguistic reductionist, thinking that language is the be-all and end-all of cognition.
I'm not saying that at all. I am saying that any (sufficiently long, varied) coherent speech needs a world model, so if something produces coherent speech, there must be a world model behind it. We can agree that the model is lacking to the extent that the language productions are incoherent: which is very little, these days.
> Funny, because riding a bicycle or speaking a language is exactly something people don't have a world model of. Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue
This is circular, because you are assuming their world-model of biking can be expressed in language. It can't!
EDIT: There are plenty of skilled experts, artists and etc. that clearly and obviously have complex world models that let them produce best-in-the-world outputs, but who can't express very precisely how they do this. I would never claim such people have no world model or understanding of what they do. Perhaps we have a semantic / definitional issue here?
> Ask someone to explain how riding a bicycle works, or an uneducated native speaker to explain the grammar of their language. They have no clue.
This works against your argument. Someone who can ride a bike clearly knows how to ride a bike; that they cannot express it in tokenized form speaks to the limited scope of the written word in representing embodiment.
I don't know how you got this so wrong. In control theory you have to build a dynamical model of your plant (machine, factory, etc.). If you have a humanoid robot, you not only need to model the robot itself, which is actually the easy part; you also have to model everything the robot is interacting with.
Once you understand that, you realize that the human brain has an internal model of almost everything it interacts with, and replicating human-level performance requires the entire human brain, not just isolated parts of it. The reason is that since we take our brains for granted, we use even the complicated and hard-to-replicate parts of the brain for tasks that appear seemingly trivial.
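To make the plant-model point concrete, here is a minimal sketch (in Python, with entirely made-up parameters) of what a control engineer means: a discrete-time state-space model of a toy plant, here a 1-D cart, with a PD controller whose gains are tuned against that model. The model itself is the "internal model" being described; without it you cannot pick gains that drive the system where you want.

```python
import numpy as np

# Toy discrete-time plant model: a 1-D cart.
# State x = [position, velocity]; input u = force.
# Time step dt and mass m are illustrative values, not from any real system.
dt, m = 0.1, 1.0
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt / m]])

def step(x, u):
    """Advance the plant one time step using the model x' = A x + B u."""
    return A @ x + B * u

def pd_controller(x, target, kp=4.0, kd=3.0):
    """PD feedback; gains were chosen against the model above."""
    pos, vel = x
    return kp * (target - pos) - kd * vel

# Simulate driving the cart from rest at 0 to position 1.0.
x = np.array([[0.0], [0.0]])
for _ in range(200):
    u = pd_controller(x.flatten(), target=1.0)
    x = step(x, u)

print(round(x[0, 0], 3))  # position settles near the target of 1.0
```

The trash-bag example below is exactly where this breaks down: the plant (a deformable bag full of tangled waste) has no tractable `A` and `B`, so the model-based loop has nothing to work with.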
When I take out the trash, organic waste needs to be thrown into the trash bin without the plastic bag. I need to untie the trash bag, pinch it from the other side and then shake it until the bag is empty. You might say big deal, but when you have tea bags or potato peels inside, they get caught on the bag handles and get stuck. You now need to shake the bag in very particular ways to dislodge the waste. Doing this with a humanoid robot is basically impossible, because you would need to model every scrap of waste inside the plastic bag. The much smarter way is to make the situation robot friendly by having the robot carry the organic waste inside a portable plastic bin without handles.
> "no matter how many books you read about ancient history, you need to have lived there before you can reasonably talk about it"
Every single time I travel somewhere new, whatever research I did, whatever reviews or blogs I read or whatever videos I watched become totally meaningless the moment I get there. Because that sliver of knowledge is simply nothing compared to the reality of the place.
Everything you read is through the interpretation of another person. Certainly someone who read a lot of books about ancient history can talk about it - but let's not pretend they have any idea what it was actually like to live there.
So you're saying that every time we talk about anything we don't have direct experience of (the past, the future, places we haven't been to, abstract concepts, etc.), we are in exactly the same position as LLMs are now: lacking a real world model and therefore unintelligent?