
Comment by yunwal

2 years ago

> Um, what?

Gotcha, you're not actually interested in conversation.

No, I literally have no idea what you're talking about, and how could I? What a projection.

  • https://openai.com/research/multimodal-neurons

    When you ask a human a question that has to do with a concept - in the article above it's Halle Berry, since it's a funny discovery, but it could be something as broad as science - you can often map that concept to specific neurons or groups of neurons. Even if the question you ask doesn't contain the word "science", it still lights up that neuron if it's about science. The same is true of neural networks: they eventually develop neurons that mean something, conceptually.

    It's not always true that the neurons a neural network develops are the same ones humans have developed, but it is true that they aren't thinking purely in words: they have a map of how concepts relate and interact with one another. That's a type of meaning, and it's a real model of the world. It's not the same one we have, and it's not even close to perfect, but neither is ours.
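
    As a rough illustration, here's what probing for a concept neuron could look like in GPT-2 with PyTorch and HuggingFace transformers. The layer and neuron indices below are made up for the example; finding a real "science neuron" takes a systematic search over many prompts, as in the paper linked above.

      import torch
      from transformers import GPT2Tokenizer, GPT2Model

      tok = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2Model.from_pretrained("gpt2").eval()

      LAYER, NEURON = 8, 1234  # hypothetical indices, purely for illustration

      acts = {}
      def hook(_module, _inp, out):
          acts["mlp"] = out.detach()  # (batch, seq_len, 3072) MLP activations

      handle = model.h[LAYER].mlp.c_fc.register_forward_hook(hook)

      def neuron_activation(text):
          ids = tok(text, return_tensors="pt")
          with torch.no_grad():
              model(**ids)
          # mean activation of the chosen neuron across the token sequence
          return acts["mlp"][0, :, NEURON].mean().item()

      # If NEURON really tracked "science", the first prompt should score
      # higher even though neither prompt contains the word "science".
      print(neuron_activation("Electrons tunnel through the potential barrier."))
      print(neuron_activation("My grandmother bakes sourdough every Sunday."))

      handle.remove()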

    • > they have a map of how concepts relate and interact with one another

      Yeah, but not one that operates the way Chomsky described. It can't tell you whether the earth is flat or not. Humans figured that out; ChatGPT can only tell you what other humans have already said. It doesn't matter that it does so via a neural net. You completely missed the point.
