Comment by famouswaffles
5 days ago
>But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/ Agent is doing what humans do.
You can think whatever you want, but an untestable distinction is an imaginary one.
First of all, that's not true. Not every position has to be empirically justified. I can reason about a position in all sorts of ways without testing. Here's an obvious example that requires no test at all:
1. Functional properties seem to arise from structural properties
2. Brains and LLMs have radically different structural properties
3. Two constructs with radically, fundamentally different structural properties are less likely to have identical functional properties
Therefore, my confidence that brains and LLMs have identical functional properties is lowered by some amount, perhaps only ever so slightly.
This isn't something I feel like fleshing out or defending; it's just an example of how I can reason about a position without testing it.
Second, I never said it wasn't testable.
Your reasoning may lower your confidence, but until it connects to observable differences, it remains at least partly a story you are telling yourself.
More importantly, the question is not whether LLMs work the same way human brains do. You may care about that, but many people do not. The relevant question is whether they exhibit the functional properties we care about. Saying “they are structurally different, therefore not really intelligent” is a lot like insisting planes are not really flying because they do not flap like birds.
And on your last point: in practice, it is not testable. There is no decisive intelligence test that sorts all humans into one bucket and all LLMs into another. So if your distinction cannot be cashed out behaviorally, functionally, or empirically, it starts to look less like a serious difference and more like a metaphysical preference.