
Comment by freejazz

2 years ago

>The correlations are built into a prediction model. Sometimes those predictions can be near certain, which is indistinguishable from human understanding.

This is quite literally not what the word "understanding" means, and trying to use my words against me this way just makes you seem smarmy and butthurt. If you're going to converse with me like that, I'm not going to engage: your material is a) pointed and aggressive, and b) completely non-responsive to what I wrote.

>You can see this quite clearly when the same neuron lights up for any prompt related to a certain topic. It’s because there’s actual abstraction being done.

Um, what?

> Um, what?

Gotcha, you're not actually interested in conversation.

  • No, I literally have no idea what you're talking about, and how could I? What a projection.

    • https://openai.com/research/multimodal-neurons

      When you ask a human a question that has to do with a concept - in the article above it's Halle Berry, because it's a funny discovery, but it could be something as broad as science - you can often map that concept to specific neurons or groups of neurons. Even if the question doesn't contain the word "science", the relevant neuron still lights up if the question is about science. The same is true of artificial neural networks: they eventually develop neurons that mean something, conceptually.

      It's not always true that the neurons a neural network develops are the same ones humans have developed, but it is true that they aren't thinking purely in words: they have a map of how concepts relate to and interact with one another. That's a kind of meaning, and it's a real model of the world - not the same one we have, and not even close to perfect, but neither is ours.
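      To make "a neuron that means something" concrete, here is a minimal sketch of how you could poke at this yourself - not the method from the OpenAI article, which studied a vision-language model - assuming a small GPT-2 model loaded through the Hugging Face transformers library. It records one hidden unit's average activation on prompts that are about science without using the word, and compares it to unrelated prompts. The layer and neuron index are arbitrary placeholders; actually finding a unit that tracks a concept means searching over many of them.

          import torch
          from transformers import AutoModel, AutoTokenizer

          MODEL_NAME = "gpt2"   # assumption: any small language model works for this sketch
          LAYER = 6             # assumption: a middle layer, chosen arbitrarily
          NEURON = 300          # assumption: a hypothetical unit index to inspect

          tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
          model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
          model.eval()

          def neuron_activation(prompt):
              # Mean activation of one hidden unit over the prompt's tokens.
              inputs = tokenizer(prompt, return_tensors="pt")
              with torch.no_grad():
                  outputs = model(**inputs)
              # hidden_states holds one (batch, seq_len, hidden_size) tensor per layer.
              layer_acts = outputs.hidden_states[LAYER][0]
              return layer_acts[:, NEURON].mean().item()

          science_prompts = [
              "The experiment measured the boiling point of the liquid.",
              "Researchers published their findings in a peer-reviewed journal.",
          ]
          unrelated_prompts = [
              "The bakery on the corner sells fresh croissants every morning.",
              "She packed her suitcase the night before the flight.",
          ]

          sci = sum(neuron_activation(p) for p in science_prompts) / len(science_prompts)
          ctl = sum(neuron_activation(p) for p in unrelated_prompts) / len(unrelated_prompts)
          print("science-related mean activation:", round(sci, 3))
          print("unrelated mean activation:", round(ctl, 3))
          # A unit that really tracks "science" should score consistently higher on the
          # first group, even though none of those prompts contain the word.

      Sweep NEURON over all hidden units and keep the ones where that gap stays large across many prompt sets, and you have candidates for the kind of concept-selective units the article describes (found there in a vision-language model rather than a text model).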
