Comment by a_victorp

11 days ago

> Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success

The fact that you can reason about intelligence is a counter argument to this

> The fact that you can reason about intelligence is a counter argument to this

The fact that we can produce a chain of reasoning, and believe it is about intelligence, doesn't mean we were actually reasoning about intelligence. This is immediately obvious when we encounter people whose conclusions are thrown off by well-known cognitive biases, like cognitive dissonance. They have no trouble producing volumes of text about how they came to their conclusions and why they are right, but they are consistently unable to notice the actual biases at play.

  • Humans think they can produce a chain of reasoning, but it has been shown many times (and is self-evident if you pay attention) that your brain makes decisions before you are aware of them.

    If I ask you to think of a movie, go ahead, think of one… whatever movie just came into your mind was not picked by you; it was served up to you from an abyss.

It seems like LLMs can also reason about intelligence. Does that make them intelligent?

We don't know what intelligence is, or isn't.

  • It's fascinating how this discussion about intelligence bumps up against the limits of text itself. We're here, reasoning and reflecting on what makes us capable of this conversation. Yet, the very structure of our arguments, the way we question definitions or assert self-awareness, mirrors patterns that LLMs are becoming increasingly adept at replicating. How confidently can we, reading these words onscreen, distinguish genuine introspection from a sophisticated echo?

    Case in point… I didn't write that paragraph by myself.

    • So you got help from a natural intelligence? No fair. (natdeo?)

      Someone needs to create a clone site of HN's format and posts, but the rules only permit synthetic intelligence comments. All models pre-prompted to read prolifically, but comment and up/down vote carefully and sparingly, to optimize the quality of discussion.

      And no looking at nat-HN comments.

      It would be very interesting to compare discussions between the sites. A human-lurker per day graph over time would also be of interest.

      Side thought: Has anyone created a Reverse-Captcha yet?

    • Mistaking the model for the meaning is the sort of mistake I very rarely see a human make, at least in the sense here of literally referring to the map ("text") in what ostensibly strives to be a discussion of the presence or absence of underlying territory, a concept the model gives no sign of attempting to invoke or manipulate. It's also a behavior I would expect from something capable of producing valid utterances but not of testing their soundness.

      I'm glad you didn't write that paragraph by yourself; I would be concerned on your behalf if you had.
