Comment by johnecheck
9 days ago
While I agree that LLMs are hardly sapient, it's very hard to make this argument without being able to pinpoint what a model of intelligence actually is.
"Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success"
What's wrong with just calling them smart algorithmic models?
Being smart allows someone to be wrong, as long as that leads to a satisfying solution. Being intelligent, on the other hand, requires foundational correctness in concepts that aren't even defined yet.
EDIT: I also somewhat like the term imperative knowledge (models) [0]
[0]: https://en.wikipedia.org/wiki/Procedural_knowledge
The problem with "smart" is that they fail at things that dumb people succeed at. They have ludicrous levels of knowledge and a jaw-dropping ability to connect pieces while missing what's right in front of them.
The gap makes me uncomfortable with the implications of the word "smart". Whatever ability they have seems orthogonal to it.
>they fail at things that dumb people succeed at
Funnily enough, you can also observe that in humans. The number of times I have seen people from highly intellectual, high-income/academic families struggle with simple tasks that even the dumbest people handle with ease is staggering. If you're not trained for something and are suddenly confronted with it for the first time, you will in all likelihood fail too. "Smart" is just as ill-defined as any other clumsy approach to defining intelligence.
Bombs can be smart, even though they sometimes miss the target.
That's not at all on par with what I'm saying.
There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior. We shouldn't seek to muddy this.
EDIT: Generally it's accepted that a core trait of intelligence is an agent's ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.
Nothing I'm aware of on the market can do this. LLMs are great at statistically inferring things, but they can't generalize, which means they lack reasoning. They also lack the ability to seek new information without prompting.
The fact that all LLMs boil down to (relatively) simple mathematics should be enough to prove the point as well. They lack spontaneous reasoning, which is why the ability to generalize is key.
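To make the "simple mathematics" concrete: at its core, a single next-token step is a matrix multiply followed by a softmax. A minimal toy sketch in Python (the names and numbers are made up for illustration, not any real model's weights):

    import numpy as np

    def softmax(z):
        # subtract the max for numerical stability before exponentiating
        e = np.exp(z - z.max())
        return e / e.sum()

    # Toy next-token step: a context vector times a weight matrix gives one
    # score (logit) per vocabulary token; softmax turns scores into probabilities.
    vocab = ["the", "cat", "sat", "mat"]
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, len(vocab)))   # stand-in for learned parameters
    context = rng.normal(size=8)           # stand-in for the encoded prompt

    logits = context @ W                   # the matrix multiply
    probs = softmax(logits)                # distribution over the next token
    print(dict(zip(vocab, probs.round(3))))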
"There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior" not really. The whole point they are trying to make is that the capability of these models IS ALREADY muddying the definition of intelligence. We can't really test it because the distribution its learned is so vast. Hence why he have things like ARC now.
Even if it's just gradient-descent-based distribution learning and there is no "internal system" (whatever you think that should look like) to support learning the distribution, the question is whether that is more than what we are doing, or whether we are starting to replicate our own mechanisms of learning.
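For anyone wondering what "gradient-descent-based distribution learning" looks like at its most stripped down, here is a toy sketch (a single-example cross-entropy loss with a hand-written gradient step; it only illustrates the mechanism, not how any production model is trained):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Toy "distribution learning": nudge the weights down the gradient of the
    # cross-entropy loss so the model puts more probability on the observed token.
    rng = np.random.default_rng(1)
    W = rng.normal(size=(8, 4))        # parameters to learn
    x = rng.normal(size=8)             # fixed context encoding
    target = 2                         # index of the observed next token
    lr = 0.1                           # learning rate

    for _ in range(200):
        probs = softmax(x @ W)
        loss = -np.log(probs[target])            # cross-entropy for one example
        grad_logits = probs.copy()
        grad_logits[target] -= 1.0               # d(loss)/d(logits) for softmax + CE
        W -= lr * np.outer(x, grad_logits)       # one gradient-descent step

    print(f"loss {loss:.4f}, p(target) {probs[target]:.3f}")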
People's memories are so short. Ten years ago the "well accepted definition of intelligence" was whether something could pass the Turing test. Now that goalpost has been completely blown out of the water, and people are scrambling to come up with a new one that precludes LLMs.
A useful definition of intelligence needs to be measurable, based on inputs and outputs, not internal state. Otherwise you run the risk of dictating how you think intelligence should manifest rather than what it actually is. The former is a prescription; only the latter is a true definition.
How does an LLM muddy the definition of intelligence any more than a database or search engine does? They are lossy databases with a natural language interface, nothing more.
> There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior.
Go on. We are listening.
I think the confusion arises because you're referring to a common understanding of what AI is, but the definition of AI differs from person to person.
Can you give your definition of AI? Also what is the "generally accepted baseline definition for what crosses the threshold of intelligent behavior"?
You are doubling down on a muddled, vague, non-technical intuition about these terms.
Please tell us what that "baseline definition" is.
> Generally it's accepted that a core trait of intelligence is an agent’s ability to achieve goals in a wide range of environments.
Be that as it may, a core trait is very different from a generally accepted threshold. What exactly is the threshold? Which environments are you referring to? How is it being measured? What goals are they?
You may have quantitative and unambiguous answers to these questions, but I don't think they would be commonly agreed upon.
> intelligence is an agent’s ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.
I applaud the bravery of trying to one-shot a definition of intelligence, but no intelligent being acts without previous experience or input. If you're talking about in-sample vs. out-of-sample, LLMs do that all the time. At some point in the conversation, they encounter something completely new and react to it in a way that emulates an intelligent agent.
What really makes them tick is that language is a huge part of the intelligence puzzle, and language is something LLMs can generate at will. When we discover and learn to emulate the rest, we will get closer and closer to superintelligence.
What is that baseline threshold for intelligence? Could you provide concrete and objective results, that if demonstrated by a computer system would satisfy your criteria for intelligence?
See the edit. It boils down to the ability to generalize, and LLMs can't generalize. I'm not the only one who holds this view either. Francois Chollet, a former intelligence researcher at Google, also shares this view.
LLMs are statistically great at inferring things? Pray tell me how often Google's AI search paragraph, at the top, is correct or useful. Is that statistically great?
> Generally it's accepted that a core trait of intelligence is an agent’s ability to achieve goals in a wide range of environments.
This is the embodiment argument - that intelligence requires the ability to interact with its environment. Far from being generally accepted, it's a controversial take.
Could Stephen Hawking achieve goals in a wide range of environments without help?
And yet it's still generally accepted that Stephen Hawking was intelligent.
> Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success
The fact that you can reason about intelligence is a counterargument to this.
> The fact that you can reason about intelligence is a counterargument to this
The fact that we can provide a chain of reasoning, and we can think that it is about intelligence, doesn't mean that we were actually reasoning about intelligence. This is immediately obvious when we encounter people whose conclusions are being thrown off by well-known cognitive biases, like cognitive dissonance. They have no trouble producing volumes of text about how they came to their conclusions and why they are right, but they are consistently unable to notice the actual biases that are at play.
Humans think they can produce a chain of reasoning, but it has been shown many times (and is self-evident if you pay attention) that your brain is making decisions before you are aware of it.
If I ask you to think of a movie, go ahead, think of one... whatever movie just came into your mind was not picked by you; it was served up to you from an abyss.
It seems like LLMs can also reason about intelligence. Does that make them intelligent?
We don't know what intelligence is, or isn't.
It's fascinating how this discussion about intelligence bumps up against the limits of text itself. We're here, reasoning and reflecting on what makes us capable of this conversation. Yet, the very structure of our arguments, the way we question definitions or assert self-awareness, mirrors patterns that LLMs are becoming increasingly adept at replicating. How confidently can we, reading these words onscreen, distinguish genuine introspection from a sophisticated echo?
Case in point… I didn't write that paragraph by myself.
The ol' "I know it when I see that it thinks like me" argument.
No offense to johnecheck, but I'd expect an LLM to be able to raise the same counterargument.
> "Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success"
Are you sure about that? Do we have proof of that? It has happened all the time throughout the history of science that scientists were convinced of something, and of a model of reality, up until someone discovered new evidence or proposed a new coherent model. That's literally the history of science: disproving what we thought was an established model.
Indeed, a good point. My comment assumes that our current model of the human brain is (sufficiently) complete.
Your comment reveals an interesting corollary: those who believe in something beyond our understanding, like the Christian soul, may never be convinced that an AI is truly sapient.
>While I agree that LLMs are hardly sapient, it's very hard to make this argument without being able to pinpoint what a model of intelligence actually is.
Maybe so, but it's trivial to do the inverse and pinpoint something that's not intelligent. I'm happy to state that an entity which has seen every game guide ever written, but still can't beat the first-generation Pokémon games, is not intelligent.
This isn't the ceiling for intelligence. But it's a reasonable floor.
There are sentient humans who can't beat the first-generation Pokémon games.
Is there a sentient human that has access to (and actually uses) all of the Pokémon game guides yet is incapable of beating Pokémon?
Because that's what an LLM is working with.
Human brains do far more than language. And non-human animals (with no language) also reason, and we cannot understand those either; we can barely understand even the very simplest ones.
I don't think your detraction has much merit.
If I don't understand how a combustion engine works, I don't need that engineering knowledge to tell you that a bicycle [an LLM] isn't a car [a human brain] just because it fits the classification of a transportation vehicle [conversational interface].
This topic is incredibly fractured because there is too much monetary interest in redefining what "intelligence" means, so I don't think a technical comparison is even useful unless the conversation begins with an explicit definition of intelligence in relation to the claims.
One problem is that we have been basing too much on [human brain] for so long that we ended up with some ethical problems, as we decided other brains didn't count as intelligent. As such, science has taken the approach of not assuming humans are uniquely intelligent. We seem to be the best around at doing different tasks with tools, but other animals are not completely incapable of doing the same. So [human brain] should really be [brain]. But is that good enough? Is a fruit fly brain intelligent? Is it a goal to aim for?
There is a second problem: we aren't looking for [human brain] or [brain], but [intelligence] or [sapient] or something similar. We aren't even sure what we want, as many people have different ideas, and, as you pointed out, we have different people with different interests pushing for different underlying definitions of what these ideas even are.
There is also a great deal of imprecision in almost any definition we use, and AI encroaches on our definitions in a way that reality rarely does. Philosophically, we aren't well prepared to defend against such attacks. If we had every ancestor of the cat before us, could we pick out the first cat and the last non-cat in that lineup? In a precise way that we would all agree upon and that isn't arbitrary? I doubt we could.
If you don't know anything except how words are used, you can definitely disambiguate "bicycle" and "car" solely based on the fact that the contexts they appear in are incongruent the vast majority of the time, and when they appear in the same context, they are explicitly contrasted against each other.
This is just the "fancy statistics" argument again, and it serves to describe any similar example you can come up with better than "intelligence exists inside this black box because I'm vibing with the output".
Why are you attempting to technically analyze a simile? That is not why comparisons are used.
Bicycles and cars are too close. The analogy I like is human leg versus tire. That is a starker depiction of how silly it is to compare the two in terms of structure rather than result.
That is a much better comparison.