When you ask a human a question that has to do with a concept - in the above article it's Halle Berry because it's a funny discovery, but it could be as broad as science - you can often map those concepts to specific neurons or groups of neurons. Even if the question doesn't contain the word "science", that neuron still lights up if the question is about science. The same is true of neural networks. They eventually develop neurons that mean something, conceptually.
It's not always true that the neurons neural networks develop are the same ones that humans have developed, but it is true that they aren't thinking purely in words: they have a map of how concepts relate and interact with one another. That's a type of meaning, and it's a real model of the world. It's not the same one we have, and it's not even close to perfect, but neither is ours.
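To make "a neuron lights up for a concept" concrete, here's a minimal sketch of probing one hidden unit's activation on concept-related vs. unrelated prompts. The model name, layer, and neuron index are placeholder assumptions, not claims about any specific model; real interpretability work (like the linked OpenAI paper) searches across many units and many probe inputs rather than picking one by hand.

```python
# Sketch: compare one hidden unit's mean activation on science-related
# vs. unrelated sentences. Indices below are hypothetical examples.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # any causal LM with accessible hidden states works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER, NEURON = 6, 300  # hypothetical layer and unit; a real probe searches over these

def neuron_activation(text: str) -> float:
    """Mean activation of one hidden unit over the tokens of `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[LAYER]  # shape: (1, seq_len, d_model)
    return hidden[0, :, NEURON].mean().item()

science = ["Photons have no rest mass.", "Evolution explains shared ancestry."]
other = ["The bakery on Main Street closes at noon.", "She parked the car outside."]

print("science-ish:", [round(neuron_activation(t), 3) for t in science])
print("unrelated:  ", [round(neuron_activation(t), 3) for t in other])
```

If a unit consistently fires harder on the first group than the second, regardless of whether the word "science" appears, that's the kind of evidence the Halle Berry / multimodal-neuron work is pointing at.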
> they have a map of how concepts relate and interact with one another
Yeah, but not one that operates the way Chomsky described. It can't tell you if the earth is flat or not. Humans figured it out. ChatGPT can only tell you what other humans already said. It doesn't matter that it does so based on a neural net. You completely missed the point.
ChatGPT can tell you the earth is round. You can ask it yourself.
If you’re saying ChatGPT can’t look at the cosmos and deduce it, well it doesn’t have access to visual input, so that’s not the dunk you think it is.
If you’re saying ChatGPT can’t learn from what you tell it, that’s a design decision by OpenAI, not inherent to machine learning.
There are absolutely models that can do primitive versions of deducing the earth’s roundness, and ChatGPT can deduce things based on text (e.g. you can ask it to play a chess position that’s not in its training set and it will give reasonably good answers most of the time).
https://openai.com/research/multimodal-neurons