Comment by arduanika

2 months ago

Because the damn things are marketed under the word "intelligence". That word used to mean something.

What did it used to mean? I was under the impression that it has always been a little vague.

  • Sure. Language is squishy, and psychometrics is hard. Nevertheless...

    "Intelligence" refers to a basket of different capabilities. Some of them are borderline cases that are hard to define. The stuff that GPT-5 failed to do here is not.

    Things like knowing what a question means, knowing what you know and don't, counting a single-digit number of items, or replying with humility if you get stuck -- these are fairly central examples of what a very, very basic intelligence should entail.

It's an umwelt problem. Bats think we're idiots because we don't hear ultrasonic sound, and thus can't echolocate. And we call the LLMs idiots because they consume tokenized inputs, and don't have access to the raw character stream.
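
To make the "tokenized inputs" point concrete, here's a minimal sketch using the tiktoken library (my choice of library and encoding; the exact token boundaries and IDs are illustrative and vary by encoding). The model never receives the ten characters of "strawberry", only a few integer IDs:

    import tiktoken

    # What the model actually "sees": integer token IDs, not characters.
    enc = tiktoken.get_encoding("cl100k_base")

    tokens = enc.encode("strawberry")
    print(tokens)  # a short list of integer IDs, not ten characters

    # Show the sub-word chunk of bytes behind each ID.
    for t in tokens:
        print(t, enc.decode_single_token_bytes(t))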

  • If you open your mind up too far, your brain will fall out.

    LLMs are not intelligence. There's not some groovy sense in which we and they are both intelligent, just thinking on a different wavelength. Machines do not think.

    We are inundated with this anthropomorphic chatter about them, and need to constantly deprogram ourselves.

  • Do bats know what senses humans have? Or do they have a concept of what a human is, compared to other organisms or moving objects? What is this analogy?

    • Yeah, I wrote this in a bit too much shorthand to meet the critics where they sit...

      There's an immense history of humans studying animal intelligence, which has tended pretty uniformly to find that animals are more intelligent than we previously thought at any given point in time. There's a very long history of badly designed experiments which surface 'false negative' results and are eventually overturned. A common flaw in these experiments is that the design assumes the animals have the same perceptions and/or interests as humans. (For example, trying to do operant conditioning using a color cue with animals who can't perceive the colors. Or tasks that are easy if you happen to have opposable thumbs... That kind of thing.) Experiments eventually come along which better meet the animals where they are, and find true positive results, and our estimation of the intelligence of animals creeps slightly higher.

      In other words, humans, in testing intelligence, have a decided bias towards only acknowledging intelligence which is distinctly human, and failing to take into account umwelt.

      LLMs have a very different umwelt than we do. If they fail a test which doesn't take that umwelt into account, it doesn't indicate non-intelligence. It is, in fact, very hard to prove non-intelligence, because intelligence is poorly defined. And we have tended consistently to make the definition loftier whenever we're threatened with not being special anymore.

  • > we call the LLMs

    "Dangerous", because they lead into thinking they do the advanced of what they don't do basically.